Jan 27 18:01:25 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 27 18:01:25 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 27 18:01:25 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 18:01:25 localhost kernel: BIOS-provided physical RAM map:
Jan 27 18:01:25 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 27 18:01:25 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 27 18:01:25 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 27 18:01:25 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 27 18:01:25 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 27 18:01:25 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 27 18:01:25 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 27 18:01:25 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 27 18:01:25 localhost kernel: NX (Execute Disable) protection: active
Jan 27 18:01:25 localhost kernel: APIC: Static calls initialized
Jan 27 18:01:25 localhost kernel: SMBIOS 2.8 present.
Jan 27 18:01:25 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 27 18:01:25 localhost kernel: Hypervisor detected: KVM
Jan 27 18:01:25 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 27 18:01:25 localhost kernel: kvm-clock: using sched offset of 4429582960 cycles
Jan 27 18:01:25 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 27 18:01:25 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 27 18:01:25 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 27 18:01:25 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 27 18:01:25 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 27 18:01:25 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 27 18:01:25 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 27 18:01:25 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 27 18:01:25 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 27 18:01:25 localhost kernel: Using GB pages for direct mapping
Jan 27 18:01:25 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 27 18:01:25 localhost kernel: ACPI: Early table checksum verification disabled
Jan 27 18:01:25 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 27 18:01:25 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 18:01:25 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 18:01:25 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 18:01:25 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 27 18:01:25 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 18:01:25 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 18:01:25 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 27 18:01:25 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 27 18:01:25 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 27 18:01:25 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 27 18:01:25 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 27 18:01:25 localhost kernel: No NUMA configuration found
Jan 27 18:01:25 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 27 18:01:25 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 27 18:01:25 localhost kernel: crashkernel reserved: 0x00000000a1000000 - 0x00000000b1000000 (256 MB)
Jan 27 18:01:25 localhost kernel: Zone ranges:
Jan 27 18:01:25 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 27 18:01:25 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 27 18:01:25 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 27 18:01:25 localhost kernel:   Device   empty
Jan 27 18:01:25 localhost kernel: Movable zone start for each node
Jan 27 18:01:25 localhost kernel: Early memory node ranges
Jan 27 18:01:25 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 27 18:01:25 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 27 18:01:25 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 27 18:01:25 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 27 18:01:25 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 27 18:01:25 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 27 18:01:25 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 27 18:01:25 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 27 18:01:25 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 27 18:01:25 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 27 18:01:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 27 18:01:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 27 18:01:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 27 18:01:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 27 18:01:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 27 18:01:25 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 27 18:01:25 localhost kernel: TSC deadline timer available
Jan 27 18:01:25 localhost kernel: CPU topo: Max. logical packages:   8
Jan 27 18:01:25 localhost kernel: CPU topo: Max. logical dies:       8
Jan 27 18:01:25 localhost kernel: CPU topo: Max. dies per package:   1
Jan 27 18:01:25 localhost kernel: CPU topo: Max. threads per core:   1
Jan 27 18:01:25 localhost kernel: CPU topo: Num. cores per package:     1
Jan 27 18:01:25 localhost kernel: CPU topo: Num. threads per package:   1
Jan 27 18:01:25 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 27 18:01:25 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 27 18:01:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 27 18:01:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 27 18:01:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 27 18:01:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 27 18:01:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 27 18:01:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 27 18:01:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 27 18:01:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 27 18:01:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 27 18:01:25 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 27 18:01:25 localhost kernel: Booting paravirtualized kernel on KVM
Jan 27 18:01:25 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 27 18:01:25 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 27 18:01:25 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 27 18:01:25 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 27 18:01:25 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 27 18:01:25 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 27 18:01:25 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 18:01:25 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 27 18:01:25 localhost kernel: random: crng init done
Jan 27 18:01:25 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 27 18:01:25 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 27 18:01:25 localhost kernel: Fallback order for Node 0: 0 
Jan 27 18:01:25 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 27 18:01:25 localhost kernel: Policy zone: Normal
Jan 27 18:01:25 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 27 18:01:25 localhost kernel: software IO TLB: area num 8.
Jan 27 18:01:25 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 27 18:01:25 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 27 18:01:25 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 27 18:01:25 localhost kernel: Dynamic Preempt: voluntary
Jan 27 18:01:25 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 27 18:01:25 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 27 18:01:25 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 27 18:01:25 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 27 18:01:25 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 27 18:01:25 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 27 18:01:25 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 27 18:01:25 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 27 18:01:25 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 18:01:25 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 18:01:25 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 18:01:25 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 27 18:01:25 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 27 18:01:25 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 27 18:01:25 localhost kernel: Console: colour VGA+ 80x25
Jan 27 18:01:25 localhost kernel: printk: console [ttyS0] enabled
Jan 27 18:01:25 localhost kernel: ACPI: Core revision 20230331
Jan 27 18:01:25 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 27 18:01:25 localhost kernel: x2apic enabled
Jan 27 18:01:25 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 27 18:01:25 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 27 18:01:25 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 27 18:01:25 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 27 18:01:25 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 27 18:01:25 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 27 18:01:25 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 27 18:01:25 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 27 18:01:25 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 27 18:01:25 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 27 18:01:25 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 27 18:01:25 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 27 18:01:25 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 27 18:01:25 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 27 18:01:25 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 27 18:01:25 localhost kernel: x86/bugs: return thunk changed
Jan 27 18:01:25 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 27 18:01:25 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 27 18:01:25 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 27 18:01:25 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 27 18:01:25 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 27 18:01:25 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 27 18:01:25 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 27 18:01:25 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 27 18:01:25 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 27 18:01:25 localhost kernel: landlock: Up and running.
Jan 27 18:01:25 localhost kernel: Yama: becoming mindful.
Jan 27 18:01:25 localhost kernel: SELinux:  Initializing.
Jan 27 18:01:25 localhost kernel: LSM support for eBPF active
Jan 27 18:01:25 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 27 18:01:25 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 27 18:01:25 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 27 18:01:25 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 27 18:01:25 localhost kernel: ... version:                0
Jan 27 18:01:25 localhost kernel: ... bit width:              48
Jan 27 18:01:25 localhost kernel: ... generic registers:      6
Jan 27 18:01:25 localhost kernel: ... value mask:             0000ffffffffffff
Jan 27 18:01:25 localhost kernel: ... max period:             00007fffffffffff
Jan 27 18:01:25 localhost kernel: ... fixed-purpose events:   0
Jan 27 18:01:25 localhost kernel: ... event mask:             000000000000003f
Jan 27 18:01:25 localhost kernel: signal: max sigframe size: 1776
Jan 27 18:01:25 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 27 18:01:25 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 27 18:01:25 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 27 18:01:25 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 27 18:01:25 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 27 18:01:25 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 27 18:01:25 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 27 18:01:25 localhost kernel: node 0 deferred pages initialised in 10ms
Jan 27 18:01:25 localhost kernel: Memory: 7763552K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618368K reserved, 0K cma-reserved)
Jan 27 18:01:25 localhost kernel: devtmpfs: initialized
Jan 27 18:01:25 localhost kernel: x86/mm: Memory block size: 128MB
Jan 27 18:01:25 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 27 18:01:25 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 27 18:01:25 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 27 18:01:25 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 27 18:01:25 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 27 18:01:25 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 27 18:01:25 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 27 18:01:25 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 27 18:01:25 localhost kernel: audit: type=2000 audit(1769536883.989:1): state=initialized audit_enabled=0 res=1
Jan 27 18:01:25 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 27 18:01:25 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 27 18:01:25 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 27 18:01:25 localhost kernel: cpuidle: using governor menu
Jan 27 18:01:25 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 27 18:01:25 localhost kernel: PCI: Using configuration type 1 for base access
Jan 27 18:01:25 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 27 18:01:25 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 27 18:01:25 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 27 18:01:25 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 27 18:01:25 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 27 18:01:25 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 27 18:01:25 localhost kernel: Demotion targets for Node 0: null
Jan 27 18:01:25 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 27 18:01:25 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 27 18:01:25 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 27 18:01:25 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 27 18:01:25 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 27 18:01:25 localhost kernel: ACPI: Interpreter enabled
Jan 27 18:01:25 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 27 18:01:25 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 27 18:01:25 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 27 18:01:25 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 27 18:01:25 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 27 18:01:25 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 27 18:01:25 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [3] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [4] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [5] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [6] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [7] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [8] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [9] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [10] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [11] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [12] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [13] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [14] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [15] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [16] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [17] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [18] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [19] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [20] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [21] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [22] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [23] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [24] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [25] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [26] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [27] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [28] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [29] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [30] registered
Jan 27 18:01:25 localhost kernel: acpiphp: Slot [31] registered
Jan 27 18:01:25 localhost kernel: PCI host bridge to bus 0000:00
Jan 27 18:01:25 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 27 18:01:25 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 27 18:01:25 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 27 18:01:25 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 27 18:01:25 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 27 18:01:25 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 27 18:01:25 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 27 18:01:25 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 27 18:01:25 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 27 18:01:25 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 27 18:01:25 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 27 18:01:25 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 27 18:01:25 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 27 18:01:25 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 27 18:01:25 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 27 18:01:25 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 27 18:01:25 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 27 18:01:25 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 27 18:01:25 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 27 18:01:25 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 27 18:01:25 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 27 18:01:25 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 27 18:01:25 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 27 18:01:25 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 27 18:01:25 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 27 18:01:25 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 27 18:01:25 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 27 18:01:25 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 27 18:01:25 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 27 18:01:25 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 27 18:01:25 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 27 18:01:25 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 27 18:01:25 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 27 18:01:25 localhost kernel: iommu: Default domain type: Translated
Jan 27 18:01:25 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 27 18:01:25 localhost kernel: SCSI subsystem initialized
Jan 27 18:01:25 localhost kernel: ACPI: bus type USB registered
Jan 27 18:01:25 localhost kernel: usbcore: registered new interface driver usbfs
Jan 27 18:01:25 localhost kernel: usbcore: registered new interface driver hub
Jan 27 18:01:25 localhost kernel: usbcore: registered new device driver usb
Jan 27 18:01:25 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 27 18:01:25 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 27 18:01:25 localhost kernel: PTP clock support registered
Jan 27 18:01:25 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 27 18:01:25 localhost kernel: NetLabel: Initializing
Jan 27 18:01:25 localhost kernel: NetLabel:  domain hash size = 128
Jan 27 18:01:25 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 27 18:01:25 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 27 18:01:25 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 27 18:01:25 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 27 18:01:25 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 27 18:01:25 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 27 18:01:25 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 27 18:01:25 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 27 18:01:25 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 27 18:01:25 localhost kernel: vgaarb: loaded
Jan 27 18:01:25 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 27 18:01:25 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 27 18:01:25 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 27 18:01:25 localhost kernel: pnp: PnP ACPI init
Jan 27 18:01:25 localhost kernel: pnp 00:03: [dma 2]
Jan 27 18:01:25 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 27 18:01:25 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 27 18:01:25 localhost kernel: NET: Registered PF_INET protocol family
Jan 27 18:01:25 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 27 18:01:25 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 27 18:01:25 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 27 18:01:25 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 27 18:01:25 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 27 18:01:25 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 27 18:01:25 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 27 18:01:25 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 27 18:01:25 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 27 18:01:25 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 27 18:01:25 localhost kernel: NET: Registered PF_XDP protocol family
Jan 27 18:01:25 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 27 18:01:25 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 27 18:01:25 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 27 18:01:25 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 27 18:01:25 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 27 18:01:25 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 27 18:01:25 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 27 18:01:25 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 71940 usecs
Jan 27 18:01:25 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 27 18:01:25 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 27 18:01:25 localhost kernel: software IO TLB: mapped [mem 0x00000000bbfdb000-0x00000000bffdb000] (64MB)
Jan 27 18:01:25 localhost kernel: ACPI: bus type thunderbolt registered
Jan 27 18:01:25 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 27 18:01:25 localhost kernel: Initialise system trusted keyrings
Jan 27 18:01:25 localhost kernel: Key type blacklist registered
Jan 27 18:01:25 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 27 18:01:25 localhost kernel: zbud: loaded
Jan 27 18:01:25 localhost kernel: integrity: Platform Keyring initialized
Jan 27 18:01:25 localhost kernel: integrity: Machine keyring initialized
Jan 27 18:01:25 localhost kernel: Freeing initrd memory: 87956K
Jan 27 18:01:25 localhost kernel: NET: Registered PF_ALG protocol family
Jan 27 18:01:25 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 27 18:01:25 localhost kernel: Key type asymmetric registered
Jan 27 18:01:25 localhost kernel: Asymmetric key parser 'x509' registered
Jan 27 18:01:25 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 27 18:01:25 localhost kernel: io scheduler mq-deadline registered
Jan 27 18:01:25 localhost kernel: io scheduler kyber registered
Jan 27 18:01:25 localhost kernel: io scheduler bfq registered
Jan 27 18:01:25 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 27 18:01:25 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 27 18:01:25 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 27 18:01:25 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 27 18:01:25 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 27 18:01:25 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 27 18:01:25 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 27 18:01:25 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 27 18:01:25 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 27 18:01:25 localhost kernel: Non-volatile memory driver v1.3
Jan 27 18:01:25 localhost kernel: rdac: device handler registered
Jan 27 18:01:25 localhost kernel: hp_sw: device handler registered
Jan 27 18:01:25 localhost kernel: emc: device handler registered
Jan 27 18:01:25 localhost kernel: alua: device handler registered
Jan 27 18:01:25 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 27 18:01:25 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 27 18:01:25 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 27 18:01:25 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 27 18:01:25 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 27 18:01:25 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 27 18:01:25 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 27 18:01:25 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 27 18:01:25 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 27 18:01:25 localhost kernel: hub 1-0:1.0: USB hub found
Jan 27 18:01:25 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 27 18:01:25 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 27 18:01:25 localhost kernel: usbserial: USB Serial support registered for generic
Jan 27 18:01:25 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 27 18:01:25 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 27 18:01:25 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 27 18:01:25 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 27 18:01:25 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 27 18:01:25 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 27 18:01:25 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 27 18:01:25 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-27T18:01:24 UTC (1769536884)
Jan 27 18:01:25 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 27 18:01:25 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 27 18:01:25 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 27 18:01:25 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 27 18:01:25 localhost kernel: usbcore: registered new interface driver usbhid
Jan 27 18:01:25 localhost kernel: usbhid: USB HID core driver
Jan 27 18:01:25 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 27 18:01:25 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 27 18:01:25 localhost kernel: Initializing XFRM netlink socket
Jan 27 18:01:25 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 27 18:01:25 localhost kernel: Segment Routing with IPv6
Jan 27 18:01:25 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 27 18:01:25 localhost kernel: mpls_gso: MPLS GSO support
Jan 27 18:01:25 localhost kernel: IPI shorthand broadcast: enabled
Jan 27 18:01:25 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 27 18:01:25 localhost kernel: AES CTR mode by8 optimization enabled
Jan 27 18:01:25 localhost kernel: sched_clock: Marking stable (1167002219, 146580734)->(1391703814, -78120861)
Jan 27 18:01:25 localhost kernel: registered taskstats version 1
Jan 27 18:01:25 localhost kernel: Loading compiled-in X.509 certificates
Jan 27 18:01:25 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 27 18:01:25 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 27 18:01:25 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 27 18:01:25 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 27 18:01:25 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 27 18:01:25 localhost kernel: Demotion targets for Node 0: null
Jan 27 18:01:25 localhost kernel: page_owner is disabled
Jan 27 18:01:25 localhost kernel: Key type .fscrypt registered
Jan 27 18:01:25 localhost kernel: Key type fscrypt-provisioning registered
Jan 27 18:01:25 localhost kernel: Key type big_key registered
Jan 27 18:01:25 localhost kernel: Key type encrypted registered
Jan 27 18:01:25 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 27 18:01:25 localhost kernel: Loading compiled-in module X.509 certificates
Jan 27 18:01:25 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 27 18:01:25 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 27 18:01:25 localhost kernel: ima: No architecture policies found
Jan 27 18:01:25 localhost kernel: evm: Initialising EVM extended attributes:
Jan 27 18:01:25 localhost kernel: evm: security.selinux
Jan 27 18:01:25 localhost kernel: evm: security.SMACK64 (disabled)
Jan 27 18:01:25 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 27 18:01:25 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 27 18:01:25 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 27 18:01:25 localhost kernel: evm: security.apparmor (disabled)
Jan 27 18:01:25 localhost kernel: evm: security.ima
Jan 27 18:01:25 localhost kernel: evm: security.capability
Jan 27 18:01:25 localhost kernel: evm: HMAC attrs: 0x1
Jan 27 18:01:25 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 27 18:01:25 localhost kernel: Running certificate verification RSA selftest
Jan 27 18:01:25 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 27 18:01:25 localhost kernel: Running certificate verification ECDSA selftest
Jan 27 18:01:25 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 27 18:01:25 localhost kernel: clk: Disabling unused clocks
Jan 27 18:01:25 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 27 18:01:25 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 27 18:01:25 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 27 18:01:25 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 27 18:01:25 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 27 18:01:25 localhost kernel: Run /init as init process
Jan 27 18:01:25 localhost kernel:   with arguments:
Jan 27 18:01:25 localhost kernel:     /init
Jan 27 18:01:25 localhost kernel:   with environment:
Jan 27 18:01:25 localhost kernel:     HOME=/
Jan 27 18:01:25 localhost kernel:     TERM=linux
Jan 27 18:01:25 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 27 18:01:25 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 27 18:01:25 localhost systemd[1]: Detected virtualization kvm.
Jan 27 18:01:25 localhost systemd[1]: Detected architecture x86-64.
Jan 27 18:01:25 localhost systemd[1]: Running in initrd.
Jan 27 18:01:25 localhost systemd[1]: No hostname configured, using default hostname.
Jan 27 18:01:25 localhost systemd[1]: Hostname set to <localhost>.
Jan 27 18:01:25 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 27 18:01:25 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 27 18:01:25 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 27 18:01:25 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 27 18:01:25 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 27 18:01:25 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 27 18:01:25 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 27 18:01:25 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 27 18:01:25 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 27 18:01:25 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 27 18:01:25 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 27 18:01:25 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 27 18:01:25 localhost systemd[1]: Reached target Local File Systems.
Jan 27 18:01:25 localhost systemd[1]: Reached target Path Units.
Jan 27 18:01:25 localhost systemd[1]: Reached target Slice Units.
Jan 27 18:01:25 localhost systemd[1]: Reached target Swaps.
Jan 27 18:01:25 localhost systemd[1]: Reached target Timer Units.
Jan 27 18:01:25 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 27 18:01:25 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 27 18:01:25 localhost systemd[1]: Listening on Journal Socket.
Jan 27 18:01:25 localhost systemd[1]: Listening on udev Control Socket.
Jan 27 18:01:25 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 27 18:01:25 localhost systemd[1]: Reached target Socket Units.
Jan 27 18:01:25 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 27 18:01:25 localhost systemd[1]: Starting Journal Service...
Jan 27 18:01:25 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 27 18:01:25 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 27 18:01:25 localhost systemd[1]: Starting Create System Users...
Jan 27 18:01:25 localhost systemd[1]: Starting Setup Virtual Console...
Jan 27 18:01:25 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 27 18:01:25 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 27 18:01:25 localhost systemd[1]: Finished Create System Users.
Jan 27 18:01:25 localhost systemd-journald[305]: Journal started
Jan 27 18:01:25 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/8c3eef75502c49c8ac4b40d0b3f964e2) is 8.0M, max 153.6M, 145.6M free.
Jan 27 18:01:25 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Jan 27 18:01:25 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Jan 27 18:01:25 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 27 18:01:25 localhost systemd[1]: Started Journal Service.
Jan 27 18:01:25 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 27 18:01:25 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 27 18:01:25 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 27 18:01:25 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 27 18:01:25 localhost systemd[1]: Finished Setup Virtual Console.
Jan 27 18:01:25 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 27 18:01:25 localhost systemd[1]: Starting dracut cmdline hook...
Jan 27 18:01:25 localhost dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Jan 27 18:01:25 localhost dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 18:01:25 localhost systemd[1]: Finished dracut cmdline hook.
Jan 27 18:01:25 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 27 18:01:25 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 27 18:01:25 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 27 18:01:25 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 27 18:01:25 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 27 18:01:25 localhost kernel: RPC: Registered udp transport module.
Jan 27 18:01:25 localhost kernel: RPC: Registered tcp transport module.
Jan 27 18:01:25 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 27 18:01:25 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 27 18:01:25 localhost rpc.statd[441]: Version 2.5.4 starting
Jan 27 18:01:25 localhost rpc.statd[441]: Initializing NSM state
Jan 27 18:01:25 localhost rpc.idmapd[446]: Setting log level to 0
Jan 27 18:01:25 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 27 18:01:25 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 27 18:01:25 localhost systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Jan 27 18:01:25 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 27 18:01:25 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 27 18:01:25 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 27 18:01:25 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 27 18:01:25 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 27 18:01:25 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 18:01:25 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 27 18:01:25 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 18:01:25 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 18:01:25 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 27 18:01:25 localhost systemd[1]: Reached target Network.
Jan 27 18:01:25 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 27 18:01:25 localhost systemd[1]: Starting dracut initqueue hook...
Jan 27 18:01:25 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 27 18:01:25 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 27 18:01:25 localhost kernel:  vda: vda1
Jan 27 18:01:25 localhost kernel: libata version 3.00 loaded.
Jan 27 18:01:25 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 27 18:01:25 localhost kernel: scsi host0: ata_piix
Jan 27 18:01:25 localhost systemd-udevd[488]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 18:01:25 localhost kernel: scsi host1: ata_piix
Jan 27 18:01:25 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 27 18:01:25 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 27 18:01:25 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 27 18:01:26 localhost systemd[1]: Reached target Initrd Root Device.
Jan 27 18:01:26 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 27 18:01:26 localhost kernel: ata1: found unknown device (class 0)
Jan 27 18:01:26 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 27 18:01:26 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 27 18:01:26 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 27 18:01:26 localhost systemd[1]: Reached target System Initialization.
Jan 27 18:01:26 localhost systemd[1]: Reached target Basic System.
Jan 27 18:01:26 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 27 18:01:26 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 27 18:01:26 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 27 18:01:26 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 27 18:01:26 localhost systemd[1]: Finished dracut initqueue hook.
Jan 27 18:01:26 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 27 18:01:26 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 27 18:01:26 localhost systemd[1]: Reached target Remote File Systems.
Jan 27 18:01:26 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 27 18:01:26 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 27 18:01:26 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 27 18:01:26 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Jan 27 18:01:26 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 27 18:01:26 localhost systemd[1]: Mounting /sysroot...
Jan 27 18:01:26 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 27 18:01:26 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 27 18:01:26 localhost kernel: XFS (vda1): Ending clean mount
Jan 27 18:01:26 localhost systemd[1]: Mounted /sysroot.
Jan 27 18:01:26 localhost systemd[1]: Reached target Initrd Root File System.
Jan 27 18:01:26 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 27 18:01:26 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 27 18:01:26 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 27 18:01:26 localhost systemd[1]: Reached target Initrd File Systems.
Jan 27 18:01:26 localhost systemd[1]: Reached target Initrd Default Target.
Jan 27 18:01:26 localhost systemd[1]: Starting dracut mount hook...
Jan 27 18:01:26 localhost systemd[1]: Finished dracut mount hook.
Jan 27 18:01:26 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 27 18:01:27 localhost rpc.idmapd[446]: exiting on signal 15
Jan 27 18:01:27 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 27 18:01:27 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 27 18:01:27 localhost systemd[1]: Stopped target Network.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Timer Units.
Jan 27 18:01:27 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 27 18:01:27 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Basic System.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Path Units.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Remote File Systems.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Slice Units.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Socket Units.
Jan 27 18:01:27 localhost systemd[1]: Stopped target System Initialization.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Local File Systems.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Swaps.
Jan 27 18:01:27 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped dracut mount hook.
Jan 27 18:01:27 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 27 18:01:27 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 27 18:01:27 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 27 18:01:27 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 27 18:01:27 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 27 18:01:27 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 27 18:01:27 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 27 18:01:27 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 27 18:01:27 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 27 18:01:27 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 27 18:01:27 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 27 18:01:27 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 27 18:01:27 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Closed udev Control Socket.
Jan 27 18:01:27 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Closed udev Kernel Socket.
Jan 27 18:01:27 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 27 18:01:27 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 27 18:01:27 localhost systemd[1]: Starting Cleanup udev Database...
Jan 27 18:01:27 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 27 18:01:27 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 27 18:01:27 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Stopped Create System Users.
Jan 27 18:01:27 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 27 18:01:27 localhost systemd[1]: Finished Cleanup udev Database.
Jan 27 18:01:27 localhost systemd[1]: Reached target Switch Root.
Jan 27 18:01:27 localhost systemd[1]: Starting Switch Root...
Jan 27 18:01:27 localhost systemd[1]: Switching root.
Jan 27 18:01:27 localhost systemd-journald[305]: Journal stopped
Jan 27 18:01:28 localhost systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Jan 27 18:01:28 localhost kernel: audit: type=1404 audit(1769536887.399:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 27 18:01:28 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 18:01:28 localhost kernel: SELinux:  policy capability open_perms=1
Jan 27 18:01:28 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 18:01:28 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 27 18:01:28 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 18:01:28 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 18:01:28 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 18:01:28 localhost kernel: audit: type=1403 audit(1769536887.532:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 27 18:01:28 localhost systemd[1]: Successfully loaded SELinux policy in 136.322ms.
Jan 27 18:01:28 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 34.758ms.
Jan 27 18:01:28 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 27 18:01:28 localhost systemd[1]: Detected virtualization kvm.
Jan 27 18:01:28 localhost systemd[1]: Detected architecture x86-64.
Jan 27 18:01:28 localhost systemd-rc-local-generator[639]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:01:28 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 27 18:01:28 localhost systemd[1]: Stopped Switch Root.
Jan 27 18:01:28 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 27 18:01:28 localhost systemd[1]: Created slice Slice /system/getty.
Jan 27 18:01:28 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 27 18:01:28 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 27 18:01:28 localhost systemd[1]: Created slice User and Session Slice.
Jan 27 18:01:28 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 27 18:01:28 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 27 18:01:28 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 27 18:01:28 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 27 18:01:28 localhost systemd[1]: Stopped target Switch Root.
Jan 27 18:01:28 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 27 18:01:28 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 27 18:01:28 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 27 18:01:28 localhost systemd[1]: Reached target Path Units.
Jan 27 18:01:28 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 27 18:01:28 localhost systemd[1]: Reached target Slice Units.
Jan 27 18:01:28 localhost systemd[1]: Reached target Swaps.
Jan 27 18:01:28 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 27 18:01:28 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 27 18:01:28 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 27 18:01:28 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 27 18:01:28 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 27 18:01:28 localhost systemd[1]: Listening on udev Control Socket.
Jan 27 18:01:28 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 27 18:01:28 localhost systemd[1]: Mounting Huge Pages File System...
Jan 27 18:01:28 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 27 18:01:28 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 27 18:01:28 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 27 18:01:28 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 27 18:01:28 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 27 18:01:28 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 18:01:28 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 27 18:01:28 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 27 18:01:28 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 27 18:01:28 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 27 18:01:28 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 27 18:01:28 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 27 18:01:28 localhost systemd[1]: Stopped Journal Service.
Jan 27 18:01:28 localhost kernel: fuse: init (API version 7.37)
Jan 27 18:01:28 localhost systemd[1]: Starting Journal Service...
Jan 27 18:01:28 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 27 18:01:28 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 27 18:01:28 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 18:01:28 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 27 18:01:28 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 27 18:01:28 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 27 18:01:28 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 27 18:01:28 localhost systemd[1]: Mounted Huge Pages File System.
Jan 27 18:01:28 localhost systemd-journald[680]: Journal started
Jan 27 18:01:28 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 27 18:01:27 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 27 18:01:27 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 27 18:01:28 localhost systemd[1]: Started Journal Service.
Jan 27 18:01:28 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 27 18:01:28 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 27 18:01:28 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 27 18:01:28 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 27 18:01:28 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 27 18:01:28 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 18:01:28 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 18:01:28 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 27 18:01:28 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 27 18:01:28 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 27 18:01:28 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 27 18:01:28 localhost kernel: ACPI: bus type drm_connector registered
Jan 27 18:01:28 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 27 18:01:28 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 27 18:01:28 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 27 18:01:28 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 27 18:01:28 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 27 18:01:28 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 27 18:01:28 localhost systemd[1]: Mounting FUSE Control File System...
Jan 27 18:01:28 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 27 18:01:28 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 27 18:01:28 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 27 18:01:28 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 27 18:01:28 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 27 18:01:28 localhost systemd[1]: Starting Create System Users...
Jan 27 18:01:28 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 27 18:01:28 localhost systemd[1]: Mounted FUSE Control File System.
Jan 27 18:01:28 localhost systemd-journald[680]: Received client request to flush runtime journal.
Jan 27 18:01:28 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 27 18:01:28 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 27 18:01:28 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 27 18:01:28 localhost systemd[1]: Finished Create System Users.
Jan 27 18:01:28 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 27 18:01:28 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 27 18:01:28 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 27 18:01:28 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 27 18:01:28 localhost systemd[1]: Reached target Local File Systems.
Jan 27 18:01:28 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 27 18:01:28 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 27 18:01:28 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 27 18:01:28 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 27 18:01:28 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 27 18:01:28 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 27 18:01:28 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 27 18:01:28 localhost bootctl[697]: Couldn't find EFI system partition, skipping.
Jan 27 18:01:28 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 27 18:01:28 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 27 18:01:28 localhost systemd[1]: Starting Security Auditing Service...
Jan 27 18:01:28 localhost systemd[1]: Starting RPC Bind...
Jan 27 18:01:28 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 27 18:01:28 localhost auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 27 18:01:28 localhost auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 27 18:01:28 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 27 18:01:28 localhost systemd[1]: Started RPC Bind.
Jan 27 18:01:28 localhost augenrules[708]: /sbin/augenrules: No change
Jan 27 18:01:28 localhost augenrules[723]: No rules
Jan 27 18:01:28 localhost augenrules[723]: enabled 1
Jan 27 18:01:28 localhost augenrules[723]: failure 1
Jan 27 18:01:28 localhost augenrules[723]: pid 703
Jan 27 18:01:28 localhost augenrules[723]: rate_limit 0
Jan 27 18:01:28 localhost augenrules[723]: backlog_limit 8192
Jan 27 18:01:28 localhost augenrules[723]: lost 0
Jan 27 18:01:28 localhost augenrules[723]: backlog 0
Jan 27 18:01:28 localhost augenrules[723]: backlog_wait_time 60000
Jan 27 18:01:28 localhost augenrules[723]: backlog_wait_time_actual 0
Jan 27 18:01:28 localhost augenrules[723]: enabled 1
Jan 27 18:01:28 localhost augenrules[723]: failure 1
Jan 27 18:01:28 localhost augenrules[723]: pid 703
Jan 27 18:01:28 localhost augenrules[723]: rate_limit 0
Jan 27 18:01:28 localhost augenrules[723]: backlog_limit 8192
Jan 27 18:01:28 localhost augenrules[723]: lost 0
Jan 27 18:01:28 localhost augenrules[723]: backlog 4
Jan 27 18:01:28 localhost augenrules[723]: backlog_wait_time 60000
Jan 27 18:01:28 localhost augenrules[723]: backlog_wait_time_actual 0
Jan 27 18:01:28 localhost augenrules[723]: enabled 1
Jan 27 18:01:28 localhost augenrules[723]: failure 1
Jan 27 18:01:28 localhost augenrules[723]: pid 703
Jan 27 18:01:28 localhost augenrules[723]: rate_limit 0
Jan 27 18:01:28 localhost augenrules[723]: backlog_limit 8192
Jan 27 18:01:28 localhost augenrules[723]: lost 0
Jan 27 18:01:28 localhost augenrules[723]: backlog 8
Jan 27 18:01:28 localhost augenrules[723]: backlog_wait_time 60000
Jan 27 18:01:28 localhost augenrules[723]: backlog_wait_time_actual 0
Jan 27 18:01:28 localhost systemd[1]: Started Security Auditing Service.
Jan 27 18:01:28 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 27 18:01:28 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 27 18:01:28 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 27 18:01:28 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 27 18:01:28 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 27 18:01:28 localhost systemd[1]: Starting Update is Completed...
Jan 27 18:01:28 localhost systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Jan 27 18:01:28 localhost systemd[1]: Finished Update is Completed.
Jan 27 18:01:28 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 27 18:01:28 localhost systemd[1]: Reached target System Initialization.
Jan 27 18:01:28 localhost systemd[1]: Started dnf makecache --timer.
Jan 27 18:01:28 localhost systemd[1]: Started Daily rotation of log files.
Jan 27 18:01:28 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 27 18:01:28 localhost systemd[1]: Reached target Timer Units.
Jan 27 18:01:28 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 27 18:01:28 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 27 18:01:28 localhost systemd[1]: Reached target Socket Units.
Jan 27 18:01:28 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 27 18:01:28 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 18:01:28 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 27 18:01:28 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 18:01:28 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 18:01:28 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 18:01:28 localhost systemd-udevd[733]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 18:01:28 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 27 18:01:28 localhost systemd[1]: Reached target Basic System.
Jan 27 18:01:28 localhost dbus-broker-lau[758]: Ready
Jan 27 18:01:29 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 27 18:01:29 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 27 18:01:29 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 27 18:01:29 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 27 18:01:29 localhost systemd[1]: Starting NTP client/server...
Jan 27 18:01:29 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 27 18:01:29 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 27 18:01:29 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 27 18:01:29 localhost systemd[1]: Started irqbalance daemon.
Jan 27 18:01:29 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 27 18:01:29 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 18:01:29 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 18:01:29 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 18:01:29 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 27 18:01:29 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 27 18:01:29 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 27 18:01:29 localhost systemd[1]: Starting User Login Management...
Jan 27 18:01:29 localhost kernel: kvm_amd: TSC scaling supported
Jan 27 18:01:29 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 27 18:01:29 localhost kernel: kvm_amd: Nested Paging enabled
Jan 27 18:01:29 localhost kernel: kvm_amd: LBR virtualization supported
Jan 27 18:01:29 localhost chronyd[803]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 27 18:01:29 localhost chronyd[803]: Loaded 0 symmetric keys
Jan 27 18:01:29 localhost chronyd[803]: Using right/UTC timezone to obtain leap second data
Jan 27 18:01:29 localhost chronyd[803]: Loaded seccomp filter (level 2)
Jan 27 18:01:29 localhost systemd[1]: Started NTP client/server.
Jan 27 18:01:29 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 27 18:01:29 localhost systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 27 18:01:29 localhost systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 27 18:01:29 localhost systemd-logind[795]: New seat seat0.
Jan 27 18:01:29 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 27 18:01:29 localhost systemd[1]: Started User Login Management.
Jan 27 18:01:29 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 27 18:01:29 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 27 18:01:29 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 27 18:01:29 localhost kernel: Console: switching to colour dummy device 80x25
Jan 27 18:01:29 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 27 18:01:29 localhost kernel: [drm] features: -context_init
Jan 27 18:01:29 localhost kernel: [drm] number of scanouts: 1
Jan 27 18:01:29 localhost kernel: [drm] number of cap sets: 0
Jan 27 18:01:29 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 27 18:01:29 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 27 18:01:29 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 27 18:01:29 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 27 18:01:29 localhost iptables.init[787]: iptables: Applying firewall rules: [  OK  ]
Jan 27 18:01:29 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 27 18:01:29 localhost cloud-init[840]: Cloud-init v. 24.4-8.el9 running 'init-local' at Tue, 27 Jan 2026 18:01:29 +0000. Up 6.56 seconds.
Jan 27 18:01:30 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 27 18:01:30 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 27 18:01:30 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp451fk1n2.mount: Deactivated successfully.
Jan 27 18:01:30 localhost systemd[1]: Starting Hostname Service...
Jan 27 18:01:30 localhost systemd[1]: Started Hostname Service.
Jan 27 18:01:30 np0005597875.novalocal systemd-hostnamed[854]: Hostname set to <np0005597875.novalocal> (static)
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Reached target Preparation for Network.
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Starting Network Manager...
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5039] NetworkManager (version 1.54.3-2.el9) is starting... (boot:89d8d250-28a2-43e9-80a8-3ccb353a2463)
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5045] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5214] manager[0x560d08f0b000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5261] hostname: hostname: using hostnamed
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5262] hostname: static hostname changed from (none) to "np0005597875.novalocal"
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5265] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5388] manager[0x560d08f0b000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5389] manager[0x560d08f0b000]: rfkill: WWAN hardware radio set enabled
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5501] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5502] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5503] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5504] manager: Networking is enabled by state file
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5507] settings: Loaded settings plugin: keyfile (internal)
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5539] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5593] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5622] dhcp: init: Using DHCP client 'internal'
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5628] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5657] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5693] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5731] device (lo): Activation: starting connection 'lo' (62ecf3fa-b7e2-49f7-a1e5-4df78c409860)
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5748] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5757] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Started Network Manager.
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5804] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Reached target Network.
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5818] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5827] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5836] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5842] device (eth0): carrier: link connected
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5853] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5868] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5887] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5898] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5902] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5913] manager: NetworkManager state is now CONNECTING
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5918] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5941] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5949] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5983] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.5991] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 18:01:30 np0005597875.novalocal NetworkManager[858]: <info>  [1769536890.6008] device (lo): Activation: successful, device activated.
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Reached target NFS client services.
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: Reached target Remote File Systems.
Jan 27 18:01:30 np0005597875.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 18:01:31 np0005597875.novalocal NetworkManager[858]: <info>  [1769536891.0939] dhcp4 (eth0): state changed new lease, address=38.102.83.238
Jan 27 18:01:31 np0005597875.novalocal NetworkManager[858]: <info>  [1769536891.0952] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 18:01:31 np0005597875.novalocal NetworkManager[858]: <info>  [1769536891.0973] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:01:31 np0005597875.novalocal NetworkManager[858]: <info>  [1769536891.1005] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:01:31 np0005597875.novalocal NetworkManager[858]: <info>  [1769536891.1007] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:01:31 np0005597875.novalocal NetworkManager[858]: <info>  [1769536891.1011] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 18:01:31 np0005597875.novalocal NetworkManager[858]: <info>  [1769536891.1014] device (eth0): Activation: successful, device activated.
Jan 27 18:01:31 np0005597875.novalocal NetworkManager[858]: <info>  [1769536891.1018] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 18:01:31 np0005597875.novalocal NetworkManager[858]: <info>  [1769536891.1021] manager: startup complete
Jan 27 18:01:31 np0005597875.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 27 18:01:31 np0005597875.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: Cloud-init v. 24.4-8.el9 running 'init' at Tue, 27 Jan 2026 18:01:31 +0000. Up 8.00 seconds.
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.238         | 255.255.255.0 | global | fa:16:3e:ea:4a:df |
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:feea:4adf/64 |       .       |  link  | fa:16:3e:ea:4a:df |
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 27 18:01:31 np0005597875.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 18:01:32 np0005597875.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Jan 27 18:01:32 np0005597875.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 27 18:01:32 np0005597875.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Jan 27 18:01:32 np0005597875.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Jan 27 18:01:32 np0005597875.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Jan 27 18:01:32 np0005597875.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: Generating public/private rsa key pair.
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: The key fingerprint is:
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: SHA256:OmdHZQSdm1s9pSD32aa4+0OWKZjlHXbMAeQIkjMqZ5M root@np0005597875.novalocal
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: The key's randomart image is:
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: +---[RSA 3072]----+
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |       ....oo+.  |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |       +. o.B  ..|
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |      o o  +o* Bo|
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |   . E     o+ B.B|
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |    + . S .= * B.|
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |       . .o = B  |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |      o o .  =   |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |       + .  . .  |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |            .o.. |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: The key fingerprint is:
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: SHA256:BMiKNYLTlwogsCySV9skb8IPrIUHex9CDL1wzqNAVts root@np0005597875.novalocal
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: The key's randomart image is:
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: +---[ECDSA 256]---+
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |B.+*+oo          |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |B==O@* .         |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |BB+@@E= .        |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |=.o==B o         |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: | ... .o S        |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |  .              |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |                 |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |                 |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |                 |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: The key fingerprint is:
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: SHA256:WRT1l+upq2tqkWZyFCHE22C+Jtj7FUqv2Xe0ematRc8 root@np0005597875.novalocal
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: The key's randomart image is:
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: +--[ED25519 256]--+
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |      oo .+o.    |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |       +.o   .  .|
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |      o + o   ...|
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |       o =     ..|
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |    o  .S..    o |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |   . o.+o*.  .o.o|
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |      +.=o. . o+E|
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |     .  =. o *o. |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: |      .+.o++O+o  |
Jan 27 18:01:32 np0005597875.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Reached target Network is Online.
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Starting System Logging Service...
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 27 18:01:32 np0005597875.novalocal sm-notify[1005]: Version 2.5.4 starting
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Starting Permit User Sessions...
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Finished Permit User Sessions.
Jan 27 18:01:32 np0005597875.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 27 18:01:32 np0005597875.novalocal sshd[1007]: Server listening on :: port 22.
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Started Command Scheduler.
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Started Getty on tty1.
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Reached target Login Prompts.
Jan 27 18:01:32 np0005597875.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Jan 27 18:01:32 np0005597875.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 27 18:01:32 np0005597875.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 27 18:01:32 np0005597875.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 33% if used.)
Jan 27 18:01:32 np0005597875.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Jan 27 18:01:33 np0005597875.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Jan 27 18:01:33 np0005597875.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 27 18:01:33 np0005597875.novalocal systemd[1]: Started System Logging Service.
Jan 27 18:01:33 np0005597875.novalocal systemd[1]: Reached target Multi-User System.
Jan 27 18:01:33 np0005597875.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 27 18:01:33 np0005597875.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 27 18:01:33 np0005597875.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 27 18:01:33 np0005597875.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 18:01:33 np0005597875.novalocal kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Jan 27 18:01:33 np0005597875.novalocal kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 27 18:01:33 np0005597875.novalocal cloud-init[1123]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Tue, 27 Jan 2026 18:01:33 +0000. Up 9.86 seconds.
Jan 27 18:01:33 np0005597875.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 27 18:01:33 np0005597875.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 27 18:01:33 np0005597875.novalocal dracut[1266]: dracut-057-102.git20250818.el9
Jan 27 18:01:33 np0005597875.novalocal cloud-init[1286]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Tue, 27 Jan 2026 18:01:33 +0000. Up 10.36 seconds.
Jan 27 18:01:33 np0005597875.novalocal dracut[1268]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 27 18:01:33 np0005597875.novalocal cloud-init[1316]: #############################################################
Jan 27 18:01:33 np0005597875.novalocal cloud-init[1317]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 27 18:01:33 np0005597875.novalocal cloud-init[1326]: 256 SHA256:BMiKNYLTlwogsCySV9skb8IPrIUHex9CDL1wzqNAVts root@np0005597875.novalocal (ECDSA)
Jan 27 18:01:33 np0005597875.novalocal cloud-init[1334]: 256 SHA256:WRT1l+upq2tqkWZyFCHE22C+Jtj7FUqv2Xe0ematRc8 root@np0005597875.novalocal (ED25519)
Jan 27 18:01:33 np0005597875.novalocal cloud-init[1338]: 3072 SHA256:OmdHZQSdm1s9pSD32aa4+0OWKZjlHXbMAeQIkjMqZ5M root@np0005597875.novalocal (RSA)
Jan 27 18:01:33 np0005597875.novalocal cloud-init[1342]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 27 18:01:33 np0005597875.novalocal cloud-init[1343]: #############################################################
Jan 27 18:01:33 np0005597875.novalocal cloud-init[1286]: Cloud-init v. 24.4-8.el9 finished at Tue, 27 Jan 2026 18:01:33 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.55 seconds
Jan 27 18:01:34 np0005597875.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 27 18:01:34 np0005597875.novalocal systemd[1]: Reached target Cloud-init target.
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 27 18:01:34 np0005597875.novalocal dracut[1268]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 27 18:01:34 np0005597875.novalocal sshd-session[1641]: Connection reset by 38.102.83.114 port 54970 [preauth]
Jan 27 18:01:34 np0005597875.novalocal sshd-session[1653]: Unable to negotiate with 38.102.83.114 port 54984: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 27 18:01:34 np0005597875.novalocal sshd-session[1677]: Unable to negotiate with 38.102.83.114 port 55006: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 27 18:01:34 np0005597875.novalocal sshd-session[1688]: Unable to negotiate with 38.102.83.114 port 55022: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 27 18:01:35 np0005597875.novalocal sshd-session[1662]: Connection closed by 38.102.83.114 port 54990 [preauth]
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: memstrack is not available
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 27 18:01:35 np0005597875.novalocal sshd-session[1741]: Unable to negotiate with 38.102.83.114 port 55046: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 27 18:01:35 np0005597875.novalocal sshd-session[1749]: Unable to negotiate with 38.102.83.114 port 55048: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 27 18:01:35 np0005597875.novalocal sshd-session[1697]: Connection closed by 38.102.83.114 port 55032 [preauth]
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 27 18:01:35 np0005597875.novalocal sshd-session[1729]: Connection closed by 38.102.83.114 port 55036 [preauth]
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: memstrack is not available
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: *** Including module: systemd ***
Jan 27 18:01:35 np0005597875.novalocal dracut[1268]: *** Including module: fips ***
Jan 27 18:01:35 np0005597875.novalocal chronyd[803]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 27 18:01:36 np0005597875.novalocal chronyd[803]: System clock TAI offset set to 37 seconds
Jan 27 18:01:36 np0005597875.novalocal dracut[1268]: *** Including module: systemd-initrd ***
Jan 27 18:01:36 np0005597875.novalocal dracut[1268]: *** Including module: i18n ***
Jan 27 18:01:36 np0005597875.novalocal dracut[1268]: *** Including module: drm ***
Jan 27 18:01:36 np0005597875.novalocal dracut[1268]: *** Including module: prefixdevname ***
Jan 27 18:01:36 np0005597875.novalocal dracut[1268]: *** Including module: kernel-modules ***
Jan 27 18:01:36 np0005597875.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 27 18:01:37 np0005597875.novalocal dracut[1268]: *** Including module: kernel-modules-extra ***
Jan 27 18:01:37 np0005597875.novalocal dracut[1268]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 27 18:01:37 np0005597875.novalocal dracut[1268]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 27 18:01:37 np0005597875.novalocal dracut[1268]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 27 18:01:37 np0005597875.novalocal dracut[1268]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 27 18:01:37 np0005597875.novalocal dracut[1268]: *** Including module: qemu ***
Jan 27 18:01:37 np0005597875.novalocal dracut[1268]: *** Including module: fstab-sys ***
Jan 27 18:01:37 np0005597875.novalocal dracut[1268]: *** Including module: rootfs-block ***
Jan 27 18:01:37 np0005597875.novalocal dracut[1268]: *** Including module: terminfo ***
Jan 27 18:01:37 np0005597875.novalocal dracut[1268]: *** Including module: udev-rules ***
Jan 27 18:01:38 np0005597875.novalocal dracut[1268]: Skipping udev rule: 91-permissions.rules
Jan 27 18:01:38 np0005597875.novalocal dracut[1268]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 27 18:01:38 np0005597875.novalocal dracut[1268]: *** Including module: virtiofs ***
Jan 27 18:01:38 np0005597875.novalocal dracut[1268]: *** Including module: dracut-systemd ***
Jan 27 18:01:38 np0005597875.novalocal dracut[1268]: *** Including module: usrmount ***
Jan 27 18:01:38 np0005597875.novalocal dracut[1268]: *** Including module: base ***
Jan 27 18:01:38 np0005597875.novalocal dracut[1268]: *** Including module: fs-lib ***
Jan 27 18:01:38 np0005597875.novalocal dracut[1268]: *** Including module: kdumpbase ***
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:   microcode_ctl module: mangling fw_dir
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: configuration "intel" is ignored
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 27 18:01:39 np0005597875.novalocal irqbalance[791]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 27 18:01:39 np0005597875.novalocal irqbalance[791]: IRQ 25 affinity is now unmanaged
Jan 27 18:01:39 np0005597875.novalocal irqbalance[791]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 27 18:01:39 np0005597875.novalocal irqbalance[791]: IRQ 31 affinity is now unmanaged
Jan 27 18:01:39 np0005597875.novalocal irqbalance[791]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 27 18:01:39 np0005597875.novalocal irqbalance[791]: IRQ 28 affinity is now unmanaged
Jan 27 18:01:39 np0005597875.novalocal irqbalance[791]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 27 18:01:39 np0005597875.novalocal irqbalance[791]: IRQ 32 affinity is now unmanaged
Jan 27 18:01:39 np0005597875.novalocal irqbalance[791]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 27 18:01:39 np0005597875.novalocal irqbalance[791]: IRQ 30 affinity is now unmanaged
Jan 27 18:01:39 np0005597875.novalocal irqbalance[791]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 27 18:01:39 np0005597875.novalocal irqbalance[791]: IRQ 29 affinity is now unmanaged
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]: *** Including module: openssl ***
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]: *** Including module: shutdown ***
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]: *** Including module: squash ***
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]: *** Including modules done ***
Jan 27 18:01:39 np0005597875.novalocal dracut[1268]: *** Installing kernel module dependencies ***
Jan 27 18:01:40 np0005597875.novalocal dracut[1268]: *** Installing kernel module dependencies done ***
Jan 27 18:01:40 np0005597875.novalocal dracut[1268]: *** Resolving executable dependencies ***
Jan 27 18:01:41 np0005597875.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 18:01:42 np0005597875.novalocal dracut[1268]: *** Resolving executable dependencies done ***
Jan 27 18:01:42 np0005597875.novalocal dracut[1268]: *** Generating early-microcode cpio image ***
Jan 27 18:01:42 np0005597875.novalocal dracut[1268]: *** Store current command line parameters ***
Jan 27 18:01:42 np0005597875.novalocal dracut[1268]: Stored kernel commandline:
Jan 27 18:01:42 np0005597875.novalocal dracut[1268]: No dracut internal kernel commandline stored in the initramfs
Jan 27 18:01:42 np0005597875.novalocal dracut[1268]: *** Install squash loader ***
Jan 27 18:01:43 np0005597875.novalocal dracut[1268]: *** Squashing the files inside the initramfs ***
Jan 27 18:01:45 np0005597875.novalocal dracut[1268]: *** Squashing the files inside the initramfs done ***
Jan 27 18:01:45 np0005597875.novalocal dracut[1268]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 27 18:01:45 np0005597875.novalocal dracut[1268]: *** Hardlinking files ***
Jan 27 18:01:45 np0005597875.novalocal dracut[1268]: Mode:           real
Jan 27 18:01:45 np0005597875.novalocal dracut[1268]: Files:          50
Jan 27 18:01:45 np0005597875.novalocal dracut[1268]: Linked:         0 files
Jan 27 18:01:45 np0005597875.novalocal dracut[1268]: Compared:       0 xattrs
Jan 27 18:01:45 np0005597875.novalocal dracut[1268]: Compared:       0 files
Jan 27 18:01:45 np0005597875.novalocal dracut[1268]: Saved:          0 B
Jan 27 18:01:45 np0005597875.novalocal dracut[1268]: Duration:       0.000567 seconds
Jan 27 18:01:45 np0005597875.novalocal dracut[1268]: *** Hardlinking files done ***
Jan 27 18:01:45 np0005597875.novalocal dracut[1268]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 27 18:01:46 np0005597875.novalocal kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Jan 27 18:01:46 np0005597875.novalocal kdumpctl[1018]: kdump: Starting kdump: [OK]
Jan 27 18:01:46 np0005597875.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 27 18:01:46 np0005597875.novalocal systemd[1]: Startup finished in 1.540s (kernel) + 2.477s (initrd) + 19.137s (userspace) = 23.155s.
Jan 27 18:02:00 np0005597875.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 18:02:05 np0005597875.novalocal sshd-session[4304]: Accepted publickey for zuul from 38.102.83.114 port 59682 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 27 18:02:05 np0005597875.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 27 18:02:05 np0005597875.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 27 18:02:05 np0005597875.novalocal systemd-logind[795]: New session 1 of user zuul.
Jan 27 18:02:06 np0005597875.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 27 18:02:06 np0005597875.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Queued start job for default target Main User Target.
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Created slice User Application Slice.
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Reached target Paths.
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Reached target Timers.
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Starting D-Bus User Message Bus Socket...
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Starting Create User's Volatile Files and Directories...
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Listening on D-Bus User Message Bus Socket.
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Reached target Sockets.
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Finished Create User's Volatile Files and Directories.
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Reached target Basic System.
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Reached target Main User Target.
Jan 27 18:02:06 np0005597875.novalocal systemd[4308]: Startup finished in 127ms.
Jan 27 18:02:06 np0005597875.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 27 18:02:06 np0005597875.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 27 18:02:06 np0005597875.novalocal sshd-session[4304]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:02:06 np0005597875.novalocal python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:02:09 np0005597875.novalocal python3[4419]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:02:15 np0005597875.novalocal python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:02:15 np0005597875.novalocal python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 27 18:02:17 np0005597875.novalocal python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq7Z/kjFTTVaxpI9pGTchsSuve2bKy+VxwtpYQPNS2S7MtJa9+JP6CZtQyXcKzI84PfD82/g36U142EJD8+eqhIREEMfECW1v4qarujs2sqdB7jMyGR1KF0MRrDmqMCJ8jhaVHAXC5Q/o7jqm5yZ72ccT78qCVn/pLJN2FTqv0/jXZX374zTNdQ1KZB+Ob/nDbPW55gj+7JBedvMxAioG11PVjwCgBeoFUqoSrmFqabDlF4yyn7sSfuZft6YMXkkOQ6J9WLl5NuzuXIyuOLXgiYa1/3T1AI9mtEPxl053CuJmFLoi3f1v3kZVb48kGclhqR27ftaIBnx8fNYdGrXlm+VRWK0m1hQVrOoksWSWkYZLeFd+tSiGuqJhOhFTH3/U1sDRHBhRKkhaPkbNQR7tiEqF0tMUm515K++XiwqTapu96FGqOA4f5LIur2wZIGyF2YPna5/ltrUMGN2q6SxPc03uhoHaOywytJTTRv41Gxm+PSeoxHaeFbnG78JVm/qs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:18 np0005597875.novalocal python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:19 np0005597875.novalocal python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:02:19 np0005597875.novalocal python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769536938.6804242-207-248024175108547/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=c7ec03e497814d0b9e100be134c07a26_id_rsa follow=False checksum=537ddf24ed57b13416654f7f444e52daba2c5728 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:19 np0005597875.novalocal python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:02:20 np0005597875.novalocal python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769536939.5781462-240-96748308185323/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=c7ec03e497814d0b9e100be134c07a26_id_rsa.pub follow=False checksum=3c20e95effe942b6df01d3923681ee0c91dc15c7 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:21 np0005597875.novalocal python3[4979]: ansible-ping Invoked with data=pong
Jan 27 18:02:22 np0005597875.novalocal python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:02:24 np0005597875.novalocal python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 27 18:02:24 np0005597875.novalocal python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:25 np0005597875.novalocal python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:25 np0005597875.novalocal python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:25 np0005597875.novalocal python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:26 np0005597875.novalocal python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:26 np0005597875.novalocal python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:27 np0005597875.novalocal sudo[5237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbrqixbnzjiwfwngshmbvezsesjowhxg ; /usr/bin/python3'
Jan 27 18:02:27 np0005597875.novalocal sudo[5237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:02:28 np0005597875.novalocal python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:28 np0005597875.novalocal sudo[5237]: pam_unix(sudo:session): session closed for user root
Jan 27 18:02:28 np0005597875.novalocal sudo[5315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueowrkxytmtrrbbfykresghbsuftqcmc ; /usr/bin/python3'
Jan 27 18:02:28 np0005597875.novalocal sudo[5315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:02:28 np0005597875.novalocal python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:02:28 np0005597875.novalocal sudo[5315]: pam_unix(sudo:session): session closed for user root
Jan 27 18:02:28 np0005597875.novalocal sudo[5388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cobnhzhbtcmmocthmjttowfjbdklxhau ; /usr/bin/python3'
Jan 27 18:02:28 np0005597875.novalocal sudo[5388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:02:29 np0005597875.novalocal python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769536948.1691852-21-261151957694175/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:29 np0005597875.novalocal sudo[5388]: pam_unix(sudo:session): session closed for user root
Jan 27 18:02:29 np0005597875.novalocal python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:30 np0005597875.novalocal python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:30 np0005597875.novalocal python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:30 np0005597875.novalocal python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:30 np0005597875.novalocal python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:31 np0005597875.novalocal python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:31 np0005597875.novalocal python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:31 np0005597875.novalocal python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:32 np0005597875.novalocal python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:32 np0005597875.novalocal python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:32 np0005597875.novalocal python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:32 np0005597875.novalocal python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:33 np0005597875.novalocal python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:33 np0005597875.novalocal python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:33 np0005597875.novalocal python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:34 np0005597875.novalocal python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:34 np0005597875.novalocal python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:34 np0005597875.novalocal python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:34 np0005597875.novalocal python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:35 np0005597875.novalocal python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:35 np0005597875.novalocal python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:35 np0005597875.novalocal python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:36 np0005597875.novalocal python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:36 np0005597875.novalocal python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:36 np0005597875.novalocal python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:36 np0005597875.novalocal python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:02:39 np0005597875.novalocal sudo[6062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rizynocyatdmvjliapawyjlwtwzwqheb ; /usr/bin/python3'
Jan 27 18:02:39 np0005597875.novalocal sudo[6062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:02:39 np0005597875.novalocal python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 27 18:02:39 np0005597875.novalocal systemd[1]: Starting Time & Date Service...
Jan 27 18:02:39 np0005597875.novalocal systemd[1]: Started Time & Date Service.
Jan 27 18:02:39 np0005597875.novalocal systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Jan 27 18:02:39 np0005597875.novalocal sudo[6062]: pam_unix(sudo:session): session closed for user root
Jan 27 18:02:39 np0005597875.novalocal sudo[6093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poizsxssplvywbewzdptobmwfhakqmlh ; /usr/bin/python3'
Jan 27 18:02:39 np0005597875.novalocal sudo[6093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:02:40 np0005597875.novalocal python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:40 np0005597875.novalocal sudo[6093]: pam_unix(sudo:session): session closed for user root
Jan 27 18:02:40 np0005597875.novalocal python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:02:40 np0005597875.novalocal python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769536960.220642-153-159976578330618/source _original_basename=tmpxkvuat5g follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:41 np0005597875.novalocal python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:02:41 np0005597875.novalocal chronyd[803]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Jan 27 18:02:41 np0005597875.novalocal python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769536961.106444-183-258622259272206/source _original_basename=tmpuf61vuql follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:42 np0005597875.novalocal sudo[6513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqhgzaetedpawttnzftjmnjiignferkp ; /usr/bin/python3'
Jan 27 18:02:42 np0005597875.novalocal sudo[6513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:02:42 np0005597875.novalocal python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:02:42 np0005597875.novalocal sudo[6513]: pam_unix(sudo:session): session closed for user root
Jan 27 18:02:42 np0005597875.novalocal sudo[6586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfyndmwyooliysmpeblqisfgoybznfcs ; /usr/bin/python3'
Jan 27 18:02:42 np0005597875.novalocal sudo[6586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:02:43 np0005597875.novalocal python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769536962.365146-231-267620665280951/source _original_basename=tmpwibnf9uf follow=False checksum=315d925a1c7d27b381f3cae1546bdf6d57bfb104 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:43 np0005597875.novalocal sudo[6586]: pam_unix(sudo:session): session closed for user root
Jan 27 18:02:43 np0005597875.novalocal python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:02:43 np0005597875.novalocal python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:02:44 np0005597875.novalocal sudo[6740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhpsrbxohphfnkbpbcannrstwjlzyahx ; /usr/bin/python3'
Jan 27 18:02:44 np0005597875.novalocal sudo[6740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:02:44 np0005597875.novalocal python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:02:44 np0005597875.novalocal sudo[6740]: pam_unix(sudo:session): session closed for user root
Jan 27 18:02:44 np0005597875.novalocal sudo[6813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppsjurymbkldoiwlsxtcdugpmbkkfqgc ; /usr/bin/python3'
Jan 27 18:02:44 np0005597875.novalocal sudo[6813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:02:44 np0005597875.novalocal python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769536964.1976473-273-173589535264174/source _original_basename=tmpb73q6t60 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:02:44 np0005597875.novalocal sudo[6813]: pam_unix(sudo:session): session closed for user root
Jan 27 18:02:45 np0005597875.novalocal sudo[6864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcjtvruroanvawnrcawasdsocxqlvqyu ; /usr/bin/python3'
Jan 27 18:02:45 np0005597875.novalocal sudo[6864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:02:45 np0005597875.novalocal python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-01a2-a1c2-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:02:45 np0005597875.novalocal sudo[6864]: pam_unix(sudo:session): session closed for user root
Jan 27 18:02:46 np0005597875.novalocal python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ec2-ffbe-01a2-a1c2-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 27 18:02:47 np0005597875.novalocal python3[6922]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:03:05 np0005597875.novalocal sudo[6946]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abpgdzmbpngeoccjafnbemzflygvltjo ; /usr/bin/python3'
Jan 27 18:03:05 np0005597875.novalocal sudo[6946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:03:05 np0005597875.novalocal python3[6948]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:03:05 np0005597875.novalocal sudo[6946]: pam_unix(sudo:session): session closed for user root
Jan 27 18:03:09 np0005597875.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 18:03:40 np0005597875.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 27 18:03:40 np0005597875.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 27 18:03:40 np0005597875.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 27 18:03:40 np0005597875.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 27 18:03:40 np0005597875.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 27 18:03:40 np0005597875.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 27 18:03:40 np0005597875.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 27 18:03:40 np0005597875.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 27 18:03:40 np0005597875.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 27 18:03:40 np0005597875.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 27 18:03:40 np0005597875.novalocal NetworkManager[858]: <info>  [1769537020.0994] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 18:03:40 np0005597875.novalocal systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 18:03:40 np0005597875.novalocal NetworkManager[858]: <info>  [1769537020.1265] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:03:40 np0005597875.novalocal NetworkManager[858]: <info>  [1769537020.1297] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 27 18:03:40 np0005597875.novalocal NetworkManager[858]: <info>  [1769537020.1302] device (eth1): carrier: link connected
Jan 27 18:03:40 np0005597875.novalocal NetworkManager[858]: <info>  [1769537020.1305] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 27 18:03:40 np0005597875.novalocal NetworkManager[858]: <info>  [1769537020.1312] policy: auto-activating connection 'Wired connection 1' (4d7ab4fa-c1ce-3c57-a6d4-71cc23f6c63c)
Jan 27 18:03:40 np0005597875.novalocal NetworkManager[858]: <info>  [1769537020.1318] device (eth1): Activation: starting connection 'Wired connection 1' (4d7ab4fa-c1ce-3c57-a6d4-71cc23f6c63c)
Jan 27 18:03:40 np0005597875.novalocal NetworkManager[858]: <info>  [1769537020.1320] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:03:40 np0005597875.novalocal NetworkManager[858]: <info>  [1769537020.1325] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:03:40 np0005597875.novalocal NetworkManager[858]: <info>  [1769537020.1330] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:03:40 np0005597875.novalocal NetworkManager[858]: <info>  [1769537020.1336] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 18:03:41 np0005597875.novalocal python3[6978]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-1481-4d28-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:03:48 np0005597875.novalocal sudo[7056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tywzwivgrjmxwisqllwwdlxatakuygmv ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 18:03:48 np0005597875.novalocal sudo[7056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:03:48 np0005597875.novalocal python3[7058]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:03:48 np0005597875.novalocal sudo[7056]: pam_unix(sudo:session): session closed for user root
Jan 27 18:03:48 np0005597875.novalocal sudo[7129]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzmimwagmipmnmukbayqngejtyehldwm ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 18:03:48 np0005597875.novalocal sudo[7129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:03:48 np0005597875.novalocal python3[7131]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769537027.9419928-102-148462078325220/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=2366182e19ea3713d431e034ceca67ea9e108633 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:03:48 np0005597875.novalocal sudo[7129]: pam_unix(sudo:session): session closed for user root
Jan 27 18:03:49 np0005597875.novalocal sudo[7179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujptsdrfnmvmjifonwkqvrkocqbtnjld ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 18:03:49 np0005597875.novalocal sudo[7179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:03:49 np0005597875.novalocal python3[7181]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[858]: <info>  [1769537029.5211] caught SIGTERM, shutting down normally.
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: Stopping Network Manager...
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[858]: <info>  [1769537029.5221] dhcp4 (eth0): canceled DHCP transaction
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[858]: <info>  [1769537029.5221] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[858]: <info>  [1769537029.5221] dhcp4 (eth0): state changed no lease
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[858]: <info>  [1769537029.5223] manager: NetworkManager state is now CONNECTING
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[858]: <info>  [1769537029.5307] dhcp4 (eth1): canceled DHCP transaction
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[858]: <info>  [1769537029.5308] dhcp4 (eth1): state changed no lease
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[858]: <info>  [1769537029.5352] exiting (success)
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: Stopped Network Manager.
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: NetworkManager.service: Consumed 1.189s CPU time, 9.7M memory peak.
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: Starting Network Manager...
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.5889] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:89d8d250-28a2-43e9-80a8-3ccb353a2463)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.5893] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.5971] manager[0x562938877000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: Starting Hostname Service...
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: Started Hostname Service.
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7036] hostname: hostname: using hostnamed
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7039] hostname: static hostname changed from (none) to "np0005597875.novalocal"
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7044] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7049] manager[0x562938877000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7050] manager[0x562938877000]: rfkill: WWAN hardware radio set enabled
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7096] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7097] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7098] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7099] manager: Networking is enabled by state file
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7104] settings: Loaded settings plugin: keyfile (internal)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7111] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7151] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7167] dhcp: init: Using DHCP client 'internal'
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7172] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7179] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7187] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7199] device (lo): Activation: starting connection 'lo' (62ecf3fa-b7e2-49f7-a1e5-4df78c409860)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7209] device (eth0): carrier: link connected
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7216] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7224] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7224] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7237] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7248] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7260] device (eth1): carrier: link connected
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7267] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7278] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (4d7ab4fa-c1ce-3c57-a6d4-71cc23f6c63c) (indicated)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7279] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7288] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7302] device (eth1): Activation: starting connection 'Wired connection 1' (4d7ab4fa-c1ce-3c57-a6d4-71cc23f6c63c)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7312] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: Started Network Manager.
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7317] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7320] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7322] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7324] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7327] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7330] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7333] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7336] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7342] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7345] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7352] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7354] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7378] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7380] dhcp4 (eth0): state changed new lease, address=38.102.83.238
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7382] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7388] device (lo): Activation: successful, device activated.
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7399] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7459] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7487] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7489] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7494] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7496] device (eth0): Activation: successful, device activated.
Jan 27 18:03:49 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537029.7501] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 18:03:49 np0005597875.novalocal sudo[7179]: pam_unix(sudo:session): session closed for user root
Jan 27 18:03:50 np0005597875.novalocal python3[7265]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-1481-4d28-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:03:59 np0005597875.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 18:04:19 np0005597875.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.3723] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 18:04:35 np0005597875.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 18:04:35 np0005597875.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4135] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4140] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4150] device (eth1): Activation: successful, device activated.
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4160] manager: startup complete
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4163] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <warn>  [1769537075.4171] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4185] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 27 18:04:35 np0005597875.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4367] dhcp4 (eth1): canceled DHCP transaction
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4368] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4368] dhcp4 (eth1): state changed no lease
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4395] policy: auto-activating connection 'ci-private-network' (b3f7d2dc-0c1f-500c-bf63-a687d2e42193)
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4402] device (eth1): Activation: starting connection 'ci-private-network' (b3f7d2dc-0c1f-500c-bf63-a687d2e42193)
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4403] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4408] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4420] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4436] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4522] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4525] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:04:35 np0005597875.novalocal NetworkManager[7188]: <info>  [1769537075.4536] device (eth1): Activation: successful, device activated.
Jan 27 18:04:36 np0005597875.novalocal systemd[4308]: Starting Mark boot as successful...
Jan 27 18:04:36 np0005597875.novalocal systemd[4308]: Finished Mark boot as successful.
Jan 27 18:04:45 np0005597875.novalocal sudo[7369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojccvlahnsauonvjvuvxhhaybahtbsjr ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 18:04:45 np0005597875.novalocal sudo[7369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:04:45 np0005597875.novalocal python3[7371]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:04:45 np0005597875.novalocal sudo[7369]: pam_unix(sudo:session): session closed for user root
Jan 27 18:04:45 np0005597875.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 18:04:45 np0005597875.novalocal sudo[7442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yviicbvnhamhkhstozdqvznbtxsrroua ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 18:04:45 np0005597875.novalocal sudo[7442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:04:45 np0005597875.novalocal python3[7444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769537085.103548-259-54713117454933/source _original_basename=tmpop03inmc follow=False checksum=b507e3886fc5aecb729681dfb8933d2e812d0c08 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:04:45 np0005597875.novalocal sudo[7442]: pam_unix(sudo:session): session closed for user root
Jan 27 18:05:46 np0005597875.novalocal sshd-session[4318]: Received disconnect from 38.102.83.114 port 59682:11: disconnected by user
Jan 27 18:05:46 np0005597875.novalocal sshd-session[4318]: Disconnected from user zuul 38.102.83.114 port 59682
Jan 27 18:05:46 np0005597875.novalocal sshd-session[4304]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:05:46 np0005597875.novalocal systemd-logind[795]: Session 1 logged out. Waiting for processes to exit.
Jan 27 18:05:55 np0005597875.novalocal chronyd[803]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 27 18:06:46 np0005597875.novalocal sshd-session[7469]: Received disconnect from 45.227.254.170 port 51662:11:  [preauth]
Jan 27 18:06:46 np0005597875.novalocal sshd-session[7469]: Disconnected from authenticating user root 45.227.254.170 port 51662 [preauth]
Jan 27 18:07:36 np0005597875.novalocal systemd[4308]: Created slice User Background Tasks Slice.
Jan 27 18:07:36 np0005597875.novalocal systemd[4308]: Starting Cleanup of User's Temporary Files and Directories...
Jan 27 18:07:36 np0005597875.novalocal systemd[4308]: Finished Cleanup of User's Temporary Files and Directories.
Jan 27 18:12:02 np0005597875.novalocal sshd-session[7476]: Accepted publickey for zuul from 38.102.83.114 port 45866 ssh2: RSA SHA256:jhFpR9mdpMGvU2F0q/HJAkqGxozs6TWh9oCwMxPPlpE
Jan 27 18:12:02 np0005597875.novalocal systemd-logind[795]: New session 3 of user zuul.
Jan 27 18:12:02 np0005597875.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 27 18:12:02 np0005597875.novalocal sshd-session[7476]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:12:02 np0005597875.novalocal sudo[7503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jajlthnatatofyemrkzolxyhkbjhjtiz ; /usr/bin/python3'
Jan 27 18:12:02 np0005597875.novalocal sudo[7503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:02 np0005597875.novalocal python3[7505]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-ecbc-7b1b-000000002183-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:12:02 np0005597875.novalocal sudo[7503]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:03 np0005597875.novalocal sudo[7532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpgmqfpzawqlmhpbwvvezandpmfagssn ; /usr/bin/python3'
Jan 27 18:12:03 np0005597875.novalocal sudo[7532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:03 np0005597875.novalocal python3[7534]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:12:03 np0005597875.novalocal sudo[7532]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:03 np0005597875.novalocal sudo[7558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niavfadqxkzpdirjvjzipsoarlgqzzio ; /usr/bin/python3'
Jan 27 18:12:03 np0005597875.novalocal sudo[7558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:03 np0005597875.novalocal python3[7560]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:12:03 np0005597875.novalocal sudo[7558]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:03 np0005597875.novalocal sudo[7584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdkxlshdktyvjgorlvlzscajymqydzzs ; /usr/bin/python3'
Jan 27 18:12:03 np0005597875.novalocal sudo[7584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:03 np0005597875.novalocal python3[7586]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:12:03 np0005597875.novalocal sudo[7584]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:03 np0005597875.novalocal sudo[7610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piqnmkhcanrjlbqppctsawqcuvvczcih ; /usr/bin/python3'
Jan 27 18:12:03 np0005597875.novalocal sudo[7610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:03 np0005597875.novalocal python3[7612]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:12:03 np0005597875.novalocal sudo[7610]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:04 np0005597875.novalocal sudo[7636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfwapmforzsxqstaozlorxrdembelsit ; /usr/bin/python3'
Jan 27 18:12:04 np0005597875.novalocal sudo[7636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:04 np0005597875.novalocal python3[7638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:12:04 np0005597875.novalocal sudo[7636]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:04 np0005597875.novalocal sudo[7714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwthisbqnzgaldokxmygzxulkzofwwkt ; /usr/bin/python3'
Jan 27 18:12:04 np0005597875.novalocal sudo[7714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:05 np0005597875.novalocal python3[7716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:12:05 np0005597875.novalocal sudo[7714]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:05 np0005597875.novalocal sudo[7787]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrepmpmvrvvgwjsfgfkczwyyweaxyakr ; /usr/bin/python3'
Jan 27 18:12:05 np0005597875.novalocal sudo[7787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:05 np0005597875.novalocal python3[7789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769537524.8689404-512-64583743268812/source _original_basename=tmpxfgni08o follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:12:05 np0005597875.novalocal sudo[7787]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:06 np0005597875.novalocal sudo[7837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihdilravzoxxrzlpejshobzufrhczdjo ; /usr/bin/python3'
Jan 27 18:12:06 np0005597875.novalocal sudo[7837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:06 np0005597875.novalocal python3[7839]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:12:06 np0005597875.novalocal systemd[1]: Reloading.
Jan 27 18:12:06 np0005597875.novalocal systemd-rc-local-generator[7857]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:12:06 np0005597875.novalocal sudo[7837]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:08 np0005597875.novalocal sudo[7893]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xthwbkkevporpsxxekxqtnrkbqlbzknp ; /usr/bin/python3'
Jan 27 18:12:08 np0005597875.novalocal sudo[7893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:08 np0005597875.novalocal python3[7895]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 27 18:12:08 np0005597875.novalocal sudo[7893]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:08 np0005597875.novalocal sudo[7919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixorurpynsgodsjniseaxinwqvhqsnfb ; /usr/bin/python3'
Jan 27 18:12:08 np0005597875.novalocal sudo[7919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:08 np0005597875.novalocal python3[7921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:12:08 np0005597875.novalocal sudo[7919]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:08 np0005597875.novalocal sudo[7947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpruiwxbagmfyegkcaufcgsuadcopwdf ; /usr/bin/python3'
Jan 27 18:12:08 np0005597875.novalocal sudo[7947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:08 np0005597875.novalocal python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:12:08 np0005597875.novalocal sudo[7947]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:08 np0005597875.novalocal sudo[7975]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jznizhneeysxgfuvgvtmketiynbrgrhn ; /usr/bin/python3'
Jan 27 18:12:08 np0005597875.novalocal sudo[7975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:09 np0005597875.novalocal python3[7977]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:12:09 np0005597875.novalocal sudo[7975]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:09 np0005597875.novalocal sudo[8003]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpvdkapybxpgzkiusgcilclrulvlfyfj ; /usr/bin/python3'
Jan 27 18:12:09 np0005597875.novalocal sudo[8003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:09 np0005597875.novalocal python3[8005]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:12:09 np0005597875.novalocal sudo[8003]: pam_unix(sudo:session): session closed for user root
Jan 27 18:12:09 np0005597875.novalocal python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-ecbc-7b1b-00000000218a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:12:11 np0005597875.novalocal python3[8062]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 18:12:12 np0005597875.novalocal sshd-session[7479]: Connection closed by 38.102.83.114 port 45866
Jan 27 18:12:12 np0005597875.novalocal sshd-session[7476]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:12:12 np0005597875.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 27 18:12:12 np0005597875.novalocal systemd[1]: session-3.scope: Consumed 3.977s CPU time.
Jan 27 18:12:12 np0005597875.novalocal systemd-logind[795]: Session 3 logged out. Waiting for processes to exit.
Jan 27 18:12:12 np0005597875.novalocal systemd-logind[795]: Removed session 3.
Jan 27 18:12:15 np0005597875.novalocal sshd-session[8067]: Accepted publickey for zuul from 38.102.83.114 port 46068 ssh2: RSA SHA256:jhFpR9mdpMGvU2F0q/HJAkqGxozs6TWh9oCwMxPPlpE
Jan 27 18:12:15 np0005597875.novalocal systemd-logind[795]: New session 4 of user zuul.
Jan 27 18:12:15 np0005597875.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 27 18:12:15 np0005597875.novalocal sshd-session[8067]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:12:15 np0005597875.novalocal sudo[8094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjnyguwrzveunefxhsnwiktyjvfiuetw ; /usr/bin/python3'
Jan 27 18:12:15 np0005597875.novalocal sudo[8094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:12:15 np0005597875.novalocal python3[8096]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 27 18:12:22 np0005597875.novalocal setsebool[8139]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 27 18:12:22 np0005597875.novalocal setsebool[8139]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 27 18:12:36 np0005597875.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 27 18:12:36 np0005597875.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 18:12:36 np0005597875.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 27 18:12:36 np0005597875.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 18:12:36 np0005597875.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 27 18:12:36 np0005597875.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 18:12:36 np0005597875.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 18:12:36 np0005597875.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 18:12:44 np0005597875.novalocal sshd-session[8168]: Connection closed by 45.148.10.240 port 39810
Jan 27 18:12:48 np0005597875.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 27 18:12:48 np0005597875.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 18:12:48 np0005597875.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 27 18:12:48 np0005597875.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 18:12:48 np0005597875.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 27 18:12:48 np0005597875.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 18:12:48 np0005597875.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 18:12:48 np0005597875.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 18:13:07 np0005597875.novalocal dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 27 18:13:07 np0005597875.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 18:13:07 np0005597875.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 27 18:13:07 np0005597875.novalocal systemd[1]: Reloading.
Jan 27 18:13:07 np0005597875.novalocal systemd-rc-local-generator[8912]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:13:07 np0005597875.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 18:13:08 np0005597875.novalocal sudo[8094]: pam_unix(sudo:session): session closed for user root
Jan 27 18:13:09 np0005597875.novalocal python3[10370]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-d8aa-6b29-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:13:09 np0005597875.novalocal kernel: evm: overlay not supported
Jan 27 18:13:09 np0005597875.novalocal systemd[4308]: Starting D-Bus User Message Bus...
Jan 27 18:13:09 np0005597875.novalocal dbus-broker-launch[11371]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 27 18:13:09 np0005597875.novalocal dbus-broker-launch[11371]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 27 18:13:09 np0005597875.novalocal systemd[4308]: Started D-Bus User Message Bus.
Jan 27 18:13:09 np0005597875.novalocal dbus-broker-launch[11371]: Ready
Jan 27 18:13:09 np0005597875.novalocal systemd[4308]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 27 18:13:09 np0005597875.novalocal systemd[4308]: Created slice Slice /user.
Jan 27 18:13:09 np0005597875.novalocal systemd[4308]: podman-11272.scope: unit configures an IP firewall, but not running as root.
Jan 27 18:13:09 np0005597875.novalocal systemd[4308]: (This warning is only shown for the first unit using IP firewalling.)
Jan 27 18:13:09 np0005597875.novalocal systemd[4308]: Started podman-11272.scope.
Jan 27 18:13:10 np0005597875.novalocal systemd[4308]: Started podman-pause-d35fa41a.scope.
Jan 27 18:13:10 np0005597875.novalocal sudo[12049]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oloffgoyedixnmoxevcdmqvgqoveqtid ; /usr/bin/python3'
Jan 27 18:13:10 np0005597875.novalocal sudo[12049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:13:10 np0005597875.novalocal python3[12074]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.198:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.198:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:13:10 np0005597875.novalocal python3[12074]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 27 18:13:10 np0005597875.novalocal sudo[12049]: pam_unix(sudo:session): session closed for user root
Jan 27 18:13:11 np0005597875.novalocal sshd-session[8070]: Connection closed by 38.102.83.114 port 46068
Jan 27 18:13:11 np0005597875.novalocal sshd-session[8067]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:13:11 np0005597875.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 27 18:13:11 np0005597875.novalocal systemd[1]: session-4.scope: Consumed 48.373s CPU time.
Jan 27 18:13:11 np0005597875.novalocal systemd-logind[795]: Session 4 logged out. Waiting for processes to exit.
Jan 27 18:13:11 np0005597875.novalocal systemd-logind[795]: Removed session 4.
Jan 27 18:13:19 np0005597875.novalocal irqbalance[791]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 27 18:13:19 np0005597875.novalocal irqbalance[791]: IRQ 27 affinity is now unmanaged
Jan 27 18:13:31 np0005597875.novalocal sshd-session[20626]: Connection closed by 38.102.83.144 port 50968 [preauth]
Jan 27 18:13:31 np0005597875.novalocal sshd-session[20625]: Connection closed by 38.102.83.144 port 50956 [preauth]
Jan 27 18:13:31 np0005597875.novalocal sshd-session[20632]: Unable to negotiate with 38.102.83.144 port 50972: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 27 18:13:31 np0005597875.novalocal sshd-session[20631]: Unable to negotiate with 38.102.83.144 port 50986: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 27 18:13:31 np0005597875.novalocal sshd-session[20630]: Unable to negotiate with 38.102.83.144 port 50992: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 27 18:13:35 np0005597875.novalocal sshd-session[22254]: Accepted publickey for zuul from 38.102.83.114 port 46400 ssh2: RSA SHA256:jhFpR9mdpMGvU2F0q/HJAkqGxozs6TWh9oCwMxPPlpE
Jan 27 18:13:35 np0005597875.novalocal systemd-logind[795]: New session 5 of user zuul.
Jan 27 18:13:35 np0005597875.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 27 18:13:35 np0005597875.novalocal sshd-session[22254]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:13:36 np0005597875.novalocal python3[22358]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+PxL5QwTZa6ixJO012fkOWzIhlw96Rh7+H2xjxVPK12WknydhB5sdz1yfmip9qCe2VD6PJlkSCsvIDZGdwl08= zuul@np0005597874.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:13:36 np0005597875.novalocal sudo[22536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ranlcacyirewvtvrywclfviftqsvzxec ; /usr/bin/python3'
Jan 27 18:13:36 np0005597875.novalocal sudo[22536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:13:36 np0005597875.novalocal python3[22546]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+PxL5QwTZa6ixJO012fkOWzIhlw96Rh7+H2xjxVPK12WknydhB5sdz1yfmip9qCe2VD6PJlkSCsvIDZGdwl08= zuul@np0005597874.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:13:36 np0005597875.novalocal sudo[22536]: pam_unix(sudo:session): session closed for user root
Jan 27 18:13:37 np0005597875.novalocal sudo[22886]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulhabbxmrurdonfvuzfmahjrwnderhgo ; /usr/bin/python3'
Jan 27 18:13:37 np0005597875.novalocal sudo[22886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:13:37 np0005597875.novalocal python3[22894]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005597875.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 27 18:13:37 np0005597875.novalocal useradd[22960]: new group: name=cloud-admin, GID=1002
Jan 27 18:13:37 np0005597875.novalocal useradd[22960]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 27 18:13:37 np0005597875.novalocal sudo[22886]: pam_unix(sudo:session): session closed for user root
Jan 27 18:13:37 np0005597875.novalocal sudo[23055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjbcffdnyxbdbcapcwpdkzypnwnlstzc ; /usr/bin/python3'
Jan 27 18:13:37 np0005597875.novalocal sudo[23055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:13:37 np0005597875.novalocal python3[23065]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+PxL5QwTZa6ixJO012fkOWzIhlw96Rh7+H2xjxVPK12WknydhB5sdz1yfmip9qCe2VD6PJlkSCsvIDZGdwl08= zuul@np0005597874.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 18:13:37 np0005597875.novalocal sudo[23055]: pam_unix(sudo:session): session closed for user root
Jan 27 18:13:38 np0005597875.novalocal sudo[23302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzcihmbdbgsabqdjqklnpoimrirkyckg ; /usr/bin/python3'
Jan 27 18:13:38 np0005597875.novalocal sudo[23302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:13:38 np0005597875.novalocal python3[23311]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:13:38 np0005597875.novalocal sudo[23302]: pam_unix(sudo:session): session closed for user root
Jan 27 18:13:38 np0005597875.novalocal sudo[23546]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghvcddhidglawvjuzemjhzzkzasmgxno ; /usr/bin/python3'
Jan 27 18:13:38 np0005597875.novalocal sudo[23546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:13:38 np0005597875.novalocal python3[23552]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769537617.8959289-135-208365130352377/source _original_basename=tmp9ewmz8uk follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:13:38 np0005597875.novalocal sudo[23546]: pam_unix(sudo:session): session closed for user root
Jan 27 18:13:39 np0005597875.novalocal sudo[23900]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdklspcgprycuhnapgxujeqixgqnswwq ; /usr/bin/python3'
Jan 27 18:13:39 np0005597875.novalocal sudo[23900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:13:39 np0005597875.novalocal python3[23906]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 27 18:13:39 np0005597875.novalocal systemd[1]: Starting Hostname Service...
Jan 27 18:13:39 np0005597875.novalocal systemd[1]: Started Hostname Service.
Jan 27 18:13:39 np0005597875.novalocal systemd-hostnamed[24014]: Changed pretty hostname to 'compute-0'
Jan 27 18:13:39 compute-0 systemd-hostnamed[24014]: Hostname set to <compute-0> (static)
Jan 27 18:13:39 compute-0 NetworkManager[7188]: <info>  [1769537619.6799] hostname: static hostname changed from "np0005597875.novalocal" to "compute-0"
Jan 27 18:13:39 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 18:13:39 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 18:13:39 compute-0 sudo[23900]: pam_unix(sudo:session): session closed for user root
Jan 27 18:13:40 compute-0 sshd-session[22302]: Connection closed by 38.102.83.114 port 46400
Jan 27 18:13:40 compute-0 sshd-session[22254]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:13:40 compute-0 systemd-logind[795]: Session 5 logged out. Waiting for processes to exit.
Jan 27 18:13:40 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Jan 27 18:13:40 compute-0 systemd[1]: session-5.scope: Consumed 2.272s CPU time.
Jan 27 18:13:40 compute-0 systemd-logind[795]: Removed session 5.
Jan 27 18:13:49 compute-0 sshd-session[27186]: Received disconnect from 91.224.92.54 port 53380:11:  [preauth]
Jan 27 18:13:49 compute-0 sshd-session[27186]: Disconnected from authenticating user root 91.224.92.54 port 53380 [preauth]
Jan 27 18:13:49 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 18:13:56 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 18:13:56 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 18:13:56 compute-0 systemd[1]: man-db-cache-update.service: Consumed 57.806s CPU time.
Jan 27 18:13:56 compute-0 systemd[1]: run-r48305755963b420695908233d91fa3ce.service: Deactivated successfully.
Jan 27 18:14:09 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 18:15:29 compute-0 sshd-session[29936]: Invalid user sol from 45.148.10.240 port 38782
Jan 27 18:15:29 compute-0 sshd-session[29936]: Connection closed by invalid user sol 45.148.10.240 port 38782 [preauth]
Jan 27 18:16:36 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 27 18:16:36 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 27 18:16:36 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 27 18:16:36 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 27 18:17:49 compute-0 sshd-session[29941]: Invalid user solana from 45.148.10.240 port 37750
Jan 27 18:17:49 compute-0 sshd-session[29941]: Connection closed by invalid user solana 45.148.10.240 port 37750 [preauth]
Jan 27 18:18:25 compute-0 sshd-session[29943]: Accepted publickey for zuul from 38.102.83.144 port 33302 ssh2: RSA SHA256:jhFpR9mdpMGvU2F0q/HJAkqGxozs6TWh9oCwMxPPlpE
Jan 27 18:18:25 compute-0 systemd-logind[795]: New session 6 of user zuul.
Jan 27 18:18:25 compute-0 systemd[1]: Started Session 6 of User zuul.
Jan 27 18:18:25 compute-0 sshd-session[29943]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:18:26 compute-0 python3[30019]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:18:27 compute-0 sudo[30133]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxyqycuyobvkzzrzgbrfcxpxyphakfbn ; /usr/bin/python3'
Jan 27 18:18:27 compute-0 sudo[30133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:27 compute-0 python3[30135]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:18:27 compute-0 sudo[30133]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:28 compute-0 sudo[30206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pycswgvpseqhntvctvhdwctfjduxfdex ; /usr/bin/python3'
Jan 27 18:18:28 compute-0 sudo[30206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:28 compute-0 python3[30208]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769537907.609595-33623-53520657365451/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:18:28 compute-0 sudo[30206]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:28 compute-0 sudo[30232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byrxccxcdccnhasegmpxjlalhnejcema ; /usr/bin/python3'
Jan 27 18:18:28 compute-0 sudo[30232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:28 compute-0 python3[30234]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:18:28 compute-0 sudo[30232]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:28 compute-0 sudo[30305]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfbpmlqpruhsvsffcbijpqlihaemrzlv ; /usr/bin/python3'
Jan 27 18:18:28 compute-0 sudo[30305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:28 compute-0 python3[30307]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769537907.609595-33623-53520657365451/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:18:28 compute-0 sudo[30305]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:29 compute-0 sudo[30331]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqlkkmhdmloswmuqaawftqviyswvqwwm ; /usr/bin/python3'
Jan 27 18:18:29 compute-0 sudo[30331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:29 compute-0 python3[30333]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:18:29 compute-0 sudo[30331]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:29 compute-0 sudo[30404]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkhesmjwnbewuncqkxmnqpvqdyxuczgi ; /usr/bin/python3'
Jan 27 18:18:29 compute-0 sudo[30404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:29 compute-0 python3[30406]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769537907.609595-33623-53520657365451/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:18:29 compute-0 sudo[30404]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:29 compute-0 sudo[30430]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fajdavmstgsdczeqirkokrydokyhvyhj ; /usr/bin/python3'
Jan 27 18:18:29 compute-0 sudo[30430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:29 compute-0 python3[30432]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:18:29 compute-0 sudo[30430]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:29 compute-0 sudo[30503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clwbxkxjeqizjczkbxghpzpskbppixvw ; /usr/bin/python3'
Jan 27 18:18:29 compute-0 sudo[30503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:30 compute-0 python3[30505]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769537907.609595-33623-53520657365451/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:18:30 compute-0 sudo[30503]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:30 compute-0 sudo[30529]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuukpkznwjqxzkxrlhxaipeebedbpmka ; /usr/bin/python3'
Jan 27 18:18:30 compute-0 sudo[30529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:30 compute-0 python3[30531]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:18:30 compute-0 sudo[30529]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:30 compute-0 sudo[30602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pydquyloeodmuzppjiyzyhzhjunvodrp ; /usr/bin/python3'
Jan 27 18:18:30 compute-0 sudo[30602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:30 compute-0 python3[30604]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769537907.609595-33623-53520657365451/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:18:30 compute-0 sudo[30602]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:30 compute-0 sudo[30628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlgktkqfhjuxnawdvgwpmegufxnnhhim ; /usr/bin/python3'
Jan 27 18:18:30 compute-0 sudo[30628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:30 compute-0 python3[30630]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:18:30 compute-0 sudo[30628]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:31 compute-0 sudo[30701]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqvoszrzmnjuqynwcjhbwgsvciysxbpd ; /usr/bin/python3'
Jan 27 18:18:31 compute-0 sudo[30701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:31 compute-0 python3[30703]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769537907.609595-33623-53520657365451/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:18:31 compute-0 sudo[30701]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:31 compute-0 sudo[30727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqglttreqeyhgkdkcjneisxxoqxalynu ; /usr/bin/python3'
Jan 27 18:18:31 compute-0 sudo[30727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:31 compute-0 python3[30729]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 18:18:31 compute-0 sudo[30727]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:31 compute-0 sudo[30800]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtichhohreetsospuseknxmwlqowmgum ; /usr/bin/python3'
Jan 27 18:18:31 compute-0 sudo[30800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:18:31 compute-0 python3[30802]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769537907.609595-33623-53520657365451/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:18:31 compute-0 sudo[30800]: pam_unix(sudo:session): session closed for user root
Jan 27 18:18:34 compute-0 sshd-session[30828]: Connection closed by 192.168.122.11 port 51988 [preauth]
Jan 27 18:18:34 compute-0 sshd-session[30827]: Connection closed by 192.168.122.11 port 51978 [preauth]
Jan 27 18:18:34 compute-0 sshd-session[30830]: Unable to negotiate with 192.168.122.11 port 52004: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 27 18:18:34 compute-0 sshd-session[30831]: Unable to negotiate with 192.168.122.11 port 52014: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 27 18:18:34 compute-0 sshd-session[30829]: Unable to negotiate with 192.168.122.11 port 52002: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 27 18:20:10 compute-0 sshd-session[30838]: Invalid user sol from 45.148.10.240 port 33048
Jan 27 18:20:10 compute-0 sshd-session[30838]: Connection closed by invalid user sol 45.148.10.240 port 33048 [preauth]
Jan 27 18:21:07 compute-0 sshd-session[30840]: Received disconnect from 45.148.10.151 port 50168:11:  [preauth]
Jan 27 18:21:07 compute-0 sshd-session[30840]: Disconnected from authenticating user root 45.148.10.151 port 50168 [preauth]
Jan 27 18:21:20 compute-0 python3[30865]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:22:25 compute-0 sshd-session[30868]: Invalid user ubuntu from 45.148.10.240 port 58670
Jan 27 18:22:25 compute-0 sshd-session[30868]: Connection closed by invalid user ubuntu 45.148.10.240 port 58670 [preauth]
Jan 27 18:24:39 compute-0 sshd-session[30871]: Invalid user ubuntu from 45.148.10.240 port 57880
Jan 27 18:24:39 compute-0 sshd-session[30871]: Connection closed by invalid user ubuntu 45.148.10.240 port 57880 [preauth]
Jan 27 18:26:20 compute-0 sshd-session[29946]: Received disconnect from 38.102.83.144 port 33302:11: disconnected by user
Jan 27 18:26:20 compute-0 sshd-session[29946]: Disconnected from user zuul 38.102.83.144 port 33302
Jan 27 18:26:20 compute-0 sshd-session[29943]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:26:20 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 27 18:26:20 compute-0 systemd[1]: session-6.scope: Consumed 4.646s CPU time.
Jan 27 18:26:20 compute-0 systemd-logind[795]: Session 6 logged out. Waiting for processes to exit.
Jan 27 18:26:20 compute-0 systemd-logind[795]: Removed session 6.
Jan 27 18:26:52 compute-0 sshd-session[30873]: Invalid user sol from 45.148.10.240 port 60396
Jan 27 18:26:52 compute-0 sshd-session[30873]: Connection closed by invalid user sol 45.148.10.240 port 60396 [preauth]
Jan 27 18:28:27 compute-0 sshd-session[30876]: Received disconnect from 91.224.92.190 port 47466:11:  [preauth]
Jan 27 18:28:27 compute-0 sshd-session[30876]: Disconnected from authenticating user root 91.224.92.190 port 47466 [preauth]
Jan 27 18:29:05 compute-0 sshd-session[30878]: Invalid user solana from 45.148.10.240 port 39396
Jan 27 18:29:05 compute-0 sshd-session[30878]: Connection closed by invalid user solana 45.148.10.240 port 39396 [preauth]
Jan 27 18:29:23 compute-0 sshd-session[30880]: Connection closed by 14.63.166.251 port 41703 [preauth]
Jan 27 18:31:21 compute-0 sshd-session[30883]: Invalid user solana from 45.148.10.240 port 58904
Jan 27 18:31:21 compute-0 sshd-session[30883]: Connection closed by invalid user solana 45.148.10.240 port 58904 [preauth]
Jan 27 18:31:55 compute-0 sshd-session[30885]: Connection closed by 94.102.49.125 port 35616
Jan 27 18:33:38 compute-0 sshd-session[30886]: Invalid user sol from 45.148.10.240 port 39104
Jan 27 18:33:38 compute-0 sshd-session[30886]: Connection closed by invalid user sol 45.148.10.240 port 39104 [preauth]
Jan 27 18:33:47 compute-0 sshd-session[30888]: Accepted publickey for zuul from 192.168.122.31 port 57836 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:33:47 compute-0 systemd-logind[795]: New session 7 of user zuul.
Jan 27 18:33:47 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 27 18:33:47 compute-0 sshd-session[30888]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:33:49 compute-0 python3.9[31041]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:33:50 compute-0 sudo[31220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hscekljumzjifgdwnsiwazqiwgeregjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538829.9637656-27-28999435808877/AnsiballZ_command.py'
Jan 27 18:33:50 compute-0 sudo[31220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:33:50 compute-0 python3.9[31222]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
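[annotation] The shell step logged above bootstraps the repo-setup tool in a throwaway virtualenv and has it write the DLRN/CentOS repository files into /etc/yum.repos.d. A minimal follow-up check (not part of the logged job, shown here only as a sketch) would be:
    ls -l /etc/yum.repos.d/
    dnf -q repolist --enabled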
Jan 27 18:33:57 compute-0 sudo[31220]: pam_unix(sudo:session): session closed for user root
Jan 27 18:33:58 compute-0 sshd-session[30891]: Connection closed by 192.168.122.31 port 57836
Jan 27 18:33:58 compute-0 sshd-session[30888]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:33:58 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 27 18:33:58 compute-0 systemd[1]: session-7.scope: Consumed 8.113s CPU time.
Jan 27 18:33:58 compute-0 systemd-logind[795]: Session 7 logged out. Waiting for processes to exit.
Jan 27 18:33:58 compute-0 systemd-logind[795]: Removed session 7.
Jan 27 18:34:05 compute-0 sshd-session[31279]: Accepted publickey for zuul from 192.168.122.31 port 57822 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:34:05 compute-0 systemd-logind[795]: New session 8 of user zuul.
Jan 27 18:34:05 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 27 18:34:05 compute-0 sshd-session[31279]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:34:06 compute-0 python3.9[31432]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:34:07 compute-0 sshd-session[31282]: Connection closed by 192.168.122.31 port 57822
Jan 27 18:34:07 compute-0 sshd-session[31279]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:34:07 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 27 18:34:07 compute-0 systemd-logind[795]: Session 8 logged out. Waiting for processes to exit.
Jan 27 18:34:07 compute-0 systemd-logind[795]: Removed session 8.
Jan 27 18:34:23 compute-0 sshd-session[31460]: Accepted publickey for zuul from 192.168.122.31 port 56148 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:34:23 compute-0 systemd-logind[795]: New session 9 of user zuul.
Jan 27 18:34:23 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 27 18:34:23 compute-0 sshd-session[31460]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:34:24 compute-0 python3.9[31613]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 27 18:34:25 compute-0 python3.9[31787]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:34:26 compute-0 sudo[31937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmhmorxfuwlwivazoyernweuxsyiqquc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538865.744395-40-7498940825223/AnsiballZ_command.py'
Jan 27 18:34:26 compute-0 sudo[31937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:34:26 compute-0 python3.9[31939]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:34:26 compute-0 sudo[31937]: pam_unix(sudo:session): session closed for user root
Jan 27 18:34:27 compute-0 sudo[32090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbedjpbsbjizrcghgbgkolordwkmwvar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538866.7898715-52-187866337000275/AnsiballZ_stat.py'
Jan 27 18:34:27 compute-0 sudo[32090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:34:27 compute-0 python3.9[32092]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:34:27 compute-0 sudo[32090]: pam_unix(sudo:session): session closed for user root
Jan 27 18:34:27 compute-0 sudo[32242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmcrelsuvvyxjuvgiadhpxuvbsnsjmdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538867.5705109-60-139236617811413/AnsiballZ_file.py'
Jan 27 18:34:27 compute-0 sudo[32242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:34:28 compute-0 python3.9[32244]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:34:28 compute-0 sudo[32242]: pam_unix(sudo:session): session closed for user root
Jan 27 18:34:28 compute-0 sudo[32394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsxxzszeecyxhfhtclhuafhauhxygisi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538868.3192077-68-233407501074091/AnsiballZ_stat.py'
Jan 27 18:34:28 compute-0 sudo[32394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:34:28 compute-0 python3.9[32396]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:34:28 compute-0 sudo[32394]: pam_unix(sudo:session): session closed for user root
Jan 27 18:34:29 compute-0 sudo[32518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vflhytcefwlrhpeyynxycieekuhggofs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538868.3192077-68-233407501074091/AnsiballZ_copy.py'
Jan 27 18:34:29 compute-0 sudo[32518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:34:29 compute-0 python3.9[32520]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769538868.3192077-68-233407501074091/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:34:29 compute-0 sudo[32518]: pam_unix(sudo:session): session closed for user root
Jan 27 18:34:30 compute-0 sudo[32670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svdzavoqhjebqcvapbfahcamcxepcwnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538869.7943933-83-225136968703100/AnsiballZ_setup.py'
Jan 27 18:34:30 compute-0 sudo[32670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:34:30 compute-0 python3.9[32672]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:34:30 compute-0 irqbalance[791]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 27 18:34:30 compute-0 irqbalance[791]: IRQ 26 affinity is now unmanaged
Jan 27 18:34:30 compute-0 sudo[32670]: pam_unix(sudo:session): session closed for user root
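[annotation] The stat/file/copy tasks above install a custom local fact file so later fact gathering with the 'local' subset can read it. Reconstructed as plain commands; the contents of bootc.fact are not captured in this log, and the source path here is only a stand-in for the Ansible temp directory shown above:
    mkdir -p /etc/ansible/facts.d
    chmod 755 /etc/ansible/facts.d
    install -m 0755 bootc.fact /etc/ansible/facts.d/bootc.fact    # bootc.fact contents not shown in this log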
Jan 27 18:34:30 compute-0 sudo[32826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olbgejipqcpokbjkjalpevzjxfsegnhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538870.7413228-91-124475781230099/AnsiballZ_file.py'
Jan 27 18:34:31 compute-0 sudo[32826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:34:31 compute-0 python3.9[32828]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:34:31 compute-0 sudo[32826]: pam_unix(sudo:session): session closed for user root
Jan 27 18:34:31 compute-0 sudo[32978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntdwljphkdfzlrpashsvzwciqijobtpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538871.3984728-100-195685524573609/AnsiballZ_file.py'
Jan 27 18:34:31 compute-0 sudo[32978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:34:31 compute-0 python3.9[32980]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:34:31 compute-0 sudo[32978]: pam_unix(sudo:session): session closed for user root
Jan 27 18:34:32 compute-0 python3.9[33130]: ansible-ansible.builtin.service_facts Invoked
Jan 27 18:34:36 compute-0 python3.9[33383]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:34:37 compute-0 python3.9[33533]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:34:38 compute-0 python3.9[33687]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:34:38 compute-0 sudo[33843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssimlibikbnnxvfzrqfwxvycbufdzfza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538878.4820359-148-86217896808539/AnsiballZ_setup.py'
Jan 27 18:34:38 compute-0 sudo[33843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:34:39 compute-0 python3.9[33845]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:34:39 compute-0 sudo[33843]: pam_unix(sudo:session): session closed for user root
Jan 27 18:34:39 compute-0 sudo[33927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuvdfmxlyogobjwnihvxfkmgovdgcgmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538878.4820359-148-86217896808539/AnsiballZ_dnf.py'
Jan 27 18:34:39 compute-0 sudo[33927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:34:39 compute-0 python3.9[33929]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
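[annotation] The dnf call above installs the node's base package set; the same set is verified later with rpm -V (18:36:34). As plain commands, with the package list copied from the logged arguments:
    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos
    rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux \
        python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc \
        ksmtuned systemd-container crypto-policies-scripts grubby sos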
Jan 27 18:35:25 compute-0 systemd[1]: Reloading.
Jan 27 18:35:25 compute-0 systemd-rc-local-generator[34127]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:35:25 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 27 18:35:25 compute-0 systemd[1]: Reloading.
Jan 27 18:35:25 compute-0 systemd-rc-local-generator[34173]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:35:25 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 27 18:35:26 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 27 18:35:26 compute-0 systemd[1]: Reloading.
Jan 27 18:35:26 compute-0 systemd-rc-local-generator[34214]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:35:26 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 27 18:35:26 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 27 18:35:26 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 27 18:35:26 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 27 18:35:55 compute-0 sshd-session[34339]: Invalid user sol from 45.148.10.240 port 37568
Jan 27 18:35:55 compute-0 sshd-session[34339]: Connection closed by invalid user sol 45.148.10.240 port 37568 [preauth]
Jan 27 18:36:13 compute-0 sshd-session[34395]: Received disconnect from 91.224.92.108 port 17076:11:  [preauth]
Jan 27 18:36:13 compute-0 sshd-session[34395]: Disconnected from authenticating user root 91.224.92.108 port 17076 [preauth]
Jan 27 18:36:33 compute-0 kernel: SELinux:  Converting 2723 SID table entries...
Jan 27 18:36:33 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 18:36:33 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 18:36:33 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 18:36:33 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 18:36:33 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 18:36:33 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 18:36:33 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 18:36:33 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 27 18:36:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 18:36:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 18:36:33 compute-0 systemd[1]: Reloading.
Jan 27 18:36:33 compute-0 systemd-rc-local-generator[34540]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:36:33 compute-0 systemd[1]: Starting dnf makecache...
Jan 27 18:36:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 18:36:34 compute-0 dnf[34556]: Failed determining last makecache time.
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-openstack-barbican-42b4c41831408a8e323 106 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 187 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-openstack-cinder-1c00d6490d88e436f26ef 207 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-python-stevedore-c4acc5639fd2329372142 180 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-python-cloudkitty-tests-tempest-2c80f8 208 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-os-refresh-config-9bfc52b5049be2d8de61 190 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 190 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-python-designate-tests-tempest-347fdbc 194 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-openstack-glance-1fd12c29b339f30fe823e 182 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 187 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 sudo[33927]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-openstack-manila-3c01b7181572c95dac462 163 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-python-whitebox-neutron-tests-tempest- 183 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-openstack-octavia-ba397f07a7331190208c 173 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-openstack-watcher-c014f81a8647287f6dcc 180 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-ansible-config_template-5ccaa22121a7ff 157 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 205 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-openstack-swift-dc98a8463506ac520c469a 199 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-python-tempestconf-8515371b7cceebd4282 207 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: delorean-openstack-heat-ui-013accbfd179753bc3f0 176 kB/s | 3.0 kB     00:00
Jan 27 18:36:34 compute-0 dnf[34556]: CentOS Stream 9 - BaseOS                         29 kB/s | 6.7 kB     00:00
Jan 27 18:36:34 compute-0 sudo[35475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnvksirtwtxhclhzbgyqpvuumloblqxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538994.4817147-160-251982364494871/AnsiballZ_command.py'
Jan 27 18:36:34 compute-0 sudo[35475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 18:36:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 18:36:34 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.285s CPU time.
Jan 27 18:36:34 compute-0 systemd[1]: run-r3f05adaa294c4976bf5f1c65f65f7573.service: Deactivated successfully.
Jan 27 18:36:34 compute-0 dnf[34556]: CentOS Stream 9 - AppStream                      65 kB/s | 6.8 kB     00:00
Jan 27 18:36:34 compute-0 python3.9[35478]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:36:35 compute-0 dnf[34556]: CentOS Stream 9 - CRB                            67 kB/s | 6.6 kB     00:00
Jan 27 18:36:35 compute-0 dnf[34556]: CentOS Stream 9 - Extras packages                69 kB/s | 7.3 kB     00:00
Jan 27 18:36:35 compute-0 dnf[34556]: dlrn-antelope-testing                           101 kB/s | 3.0 kB     00:00
Jan 27 18:36:35 compute-0 dnf[34556]: dlrn-antelope-build-deps                        105 kB/s | 3.0 kB     00:00
Jan 27 18:36:35 compute-0 dnf[34556]: centos9-rabbitmq                                 96 kB/s | 3.0 kB     00:00
Jan 27 18:36:35 compute-0 dnf[34556]: centos9-storage                                 116 kB/s | 3.0 kB     00:00
Jan 27 18:36:35 compute-0 dnf[34556]: centos9-opstools                                100 kB/s | 3.0 kB     00:00
Jan 27 18:36:35 compute-0 dnf[34556]: NFV SIG OpenvSwitch                             118 kB/s | 3.0 kB     00:00
Jan 27 18:36:35 compute-0 dnf[34556]: repo-setup-centos-appstream                     155 kB/s | 4.4 kB     00:00
Jan 27 18:36:35 compute-0 dnf[34556]: repo-setup-centos-baseos                        167 kB/s | 3.9 kB     00:00
Jan 27 18:36:35 compute-0 dnf[34556]: repo-setup-centos-highavailability              149 kB/s | 3.9 kB     00:00
Jan 27 18:36:35 compute-0 dnf[34556]: repo-setup-centos-powertools                    132 kB/s | 4.3 kB     00:00
Jan 27 18:36:35 compute-0 dnf[34556]: Extra Packages for Enterprise Linux 9 - x86_64  207 kB/s |  28 kB     00:00
Jan 27 18:36:36 compute-0 sudo[35475]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:36 compute-0 dnf[34556]: Metadata cache created.
Jan 27 18:36:36 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 27 18:36:36 compute-0 systemd[1]: Finished dnf makecache.
Jan 27 18:36:36 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.904s CPU time.
Jan 27 18:36:36 compute-0 sudo[35780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrykqeiijnmrvwsoazqlywzkzlgnbfuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538996.2462463-168-499184506104/AnsiballZ_selinux.py'
Jan 27 18:36:36 compute-0 sudo[35780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:37 compute-0 python3.9[35782]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 27 18:36:37 compute-0 sudo[35780]: pam_unix(sudo:session): session closed for user root
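[annotation] The ansible.posix.selinux task above pins the targeted policy in enforcing mode. A rough manual equivalent (the module edits /etc/selinux/config and applies the runtime state):
    sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config
    sed -i 's/^SELINUXTYPE=.*/SELINUXTYPE=targeted/' /etc/selinux/config
    setenforce 1
    getenforce        # expect "Enforcing"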
Jan 27 18:36:37 compute-0 sudo[35932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbbqooduuoikjikxhgvwkaxkdiwjyqfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538997.5209582-179-165531828828915/AnsiballZ_command.py'
Jan 27 18:36:37 compute-0 sudo[35932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:37 compute-0 python3.9[35934]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 27 18:36:39 compute-0 sudo[35932]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:39 compute-0 sudo[36085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljfefsktqiyzckdgutpjdrwmdikownxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769538999.7099495-187-39682592294609/AnsiballZ_file.py'
Jan 27 18:36:39 compute-0 sudo[36085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:42 compute-0 python3.9[36087]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:36:42 compute-0 sudo[36085]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:43 compute-0 sudo[36237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znozamksxvinmhaisuzawmrlpjxqqpyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539002.7908516-195-74678522359839/AnsiballZ_mount.py'
Jan 27 18:36:43 compute-0 sudo[36237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:43 compute-0 python3.9[36239]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 27 18:36:43 compute-0 sudo[36237]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:44 compute-0 sudo[36389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqgoeddnvukvsfazugjdjgwszikzveou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539004.2408605-223-222265980640702/AnsiballZ_file.py'
Jan 27 18:36:44 compute-0 sudo[36389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:44 compute-0 python3.9[36391]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:36:44 compute-0 sudo[36389]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:45 compute-0 sudo[36541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaaqpzukxcbcrneiymggqzujknjilxxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539004.9222672-231-102174139117162/AnsiballZ_stat.py'
Jan 27 18:36:45 compute-0 sudo[36541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:45 compute-0 python3.9[36543]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:36:45 compute-0 sudo[36541]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:45 compute-0 sudo[36664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohcujqpfufzlxmrkexrbpwqirhmgewld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539004.9222672-231-102174139117162/AnsiballZ_copy.py'
Jan 27 18:36:45 compute-0 sudo[36664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:45 compute-0 python3.9[36666]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539004.9222672-231-102174139117162/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=15265196e336ee28e0080070f65aca785e7a9615 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:36:45 compute-0 sudo[36664]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:46 compute-0 sudo[36816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eejpqqgtbhivujaolhudsykegfjehace ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539006.4166281-255-110454658359263/AnsiballZ_stat.py'
Jan 27 18:36:46 compute-0 sudo[36816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:46 compute-0 python3.9[36818]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:36:46 compute-0 sudo[36816]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:47 compute-0 sudo[36968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chqdpwrincszhhdiqnrbcammkwsmjgzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539007.1270475-263-224347763301956/AnsiballZ_command.py'
Jan 27 18:36:47 compute-0 sudo[36968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:47 compute-0 python3.9[36970]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:36:47 compute-0 sudo[36968]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:48 compute-0 sudo[37121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzunwfilxinuvrghopjxncpxxkhygxlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539007.9782689-271-172378808987945/AnsiballZ_file.py'
Jan 27 18:36:48 compute-0 sudo[37121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:48 compute-0 python3.9[37123]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:36:48 compute-0 sudo[37121]: pam_unix(sudo:session): session closed for user root
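[annotation] The two tasks above seed the LVM devices file: vgimportdevices adds any visible PVs, and the touch/permission task ensures /etc/lvm/devices/system.devices exists even if no PVs were found. As shell:
    /usr/sbin/vgimportdevices --all
    touch /etc/lvm/devices/system.devices
    chown root:root /etc/lvm/devices/system.devices
    chmod 600 /etc/lvm/devices/system.devices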
Jan 27 18:36:49 compute-0 sudo[37273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phpgrmcgtassmvqsvzfcmwftdqmvmcrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539008.8656387-282-248203713236626/AnsiballZ_getent.py'
Jan 27 18:36:49 compute-0 sudo[37273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:49 compute-0 python3.9[37275]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 27 18:36:49 compute-0 sudo[37273]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:49 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 18:36:50 compute-0 sudo[37427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdbdpzalqlsqnfpfyxehugyxwhgrxstn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539009.7024913-290-218050297150933/AnsiballZ_group.py'
Jan 27 18:36:50 compute-0 sudo[37427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:50 compute-0 python3.9[37429]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 18:36:50 compute-0 groupadd[37430]: group added to /etc/group: name=qemu, GID=107
Jan 27 18:36:50 compute-0 groupadd[37430]: group added to /etc/gshadow: name=qemu
Jan 27 18:36:50 compute-0 groupadd[37430]: new group: name=qemu, GID=107
Jan 27 18:36:50 compute-0 sudo[37427]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:51 compute-0 sudo[37585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxrjpzugxsfdqfkbhrnhddcrsotqtidf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539010.6018407-298-152826543236644/AnsiballZ_user.py'
Jan 27 18:36:51 compute-0 sudo[37585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:51 compute-0 python3.9[37587]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 18:36:51 compute-0 useradd[37589]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 18:36:51 compute-0 sudo[37585]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:51 compute-0 sudo[37745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbpgxehngmhbwhiyqmmqcsveqlkrciwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539011.5416262-306-139641952412226/AnsiballZ_getent.py'
Jan 27 18:36:51 compute-0 sudo[37745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:52 compute-0 python3.9[37747]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 27 18:36:52 compute-0 sudo[37745]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:52 compute-0 sudo[37898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guueychuexrrimtjpfsnzvwlrjyhrver ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539012.237065-314-266327500166529/AnsiballZ_group.py'
Jan 27 18:36:52 compute-0 sudo[37898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:52 compute-0 python3.9[37900]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 18:36:52 compute-0 groupadd[37901]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 27 18:36:52 compute-0 groupadd[37901]: group added to /etc/gshadow: name=hugetlbfs
Jan 27 18:36:52 compute-0 groupadd[37901]: new group: name=hugetlbfs, GID=42477
Jan 27 18:36:52 compute-0 sudo[37898]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:53 compute-0 sudo[38056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvyzvunlzlynwrdnquggosllasulmyyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539012.9253867-323-100810281450681/AnsiballZ_file.py'
Jan 27 18:36:53 compute-0 sudo[38056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:53 compute-0 python3.9[38058]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 27 18:36:53 compute-0 sudo[38056]: pam_unix(sudo:session): session closed for user root
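[annotation] The identity and socket-directory tasks from 18:36:49 onward reduce to the following commands. UID/GID values are copied from the logged module arguments; chcon is shown here as one way to match setype=virt_cache_t / seuser=system_u:
    groupadd -g 107 qemu
    useradd -u 107 -g qemu -s /sbin/nologin -c 'qemu user' qemu
    groupadd -g 42477 hugetlbfs
    mkdir -p /var/lib/vhost_sockets
    chown qemu:qemu /var/lib/vhost_sockets
    chmod 0755 /var/lib/vhost_sockets
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets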
Jan 27 18:36:53 compute-0 sudo[38208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdwvuzdtxbjhblwfjiqfdwycqfkdgnss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539013.721149-334-70582645345602/AnsiballZ_dnf.py'
Jan 27 18:36:54 compute-0 sudo[38208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:54 compute-0 python3.9[38210]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:36:56 compute-0 sudo[38208]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:57 compute-0 sudo[38362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxmzmgrgipyucopzcrlgvsfoaxesnppi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539016.8838198-342-257638223328965/AnsiballZ_file.py'
Jan 27 18:36:57 compute-0 sudo[38362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:57 compute-0 python3.9[38364]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:36:57 compute-0 sudo[38362]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:57 compute-0 sudo[38514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrlwuqyqadbcluvweijexyzobhurxmof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539017.4824574-350-59243187843877/AnsiballZ_stat.py'
Jan 27 18:36:57 compute-0 sudo[38514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:57 compute-0 python3.9[38516]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:36:57 compute-0 sudo[38514]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:58 compute-0 sudo[38637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrmtivcrroqfnlzhlryrgtvyezjlrope ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539017.4824574-350-59243187843877/AnsiballZ_copy.py'
Jan 27 18:36:58 compute-0 sudo[38637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:58 compute-0 python3.9[38639]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539017.4824574-350-59243187843877/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:36:58 compute-0 sudo[38637]: pam_unix(sudo:session): session closed for user root
Jan 27 18:36:59 compute-0 sudo[38789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcyuqvslzmlbsvpvneabhgvehlhiacei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539018.6804605-365-261062391612281/AnsiballZ_systemd.py'
Jan 27 18:36:59 compute-0 sudo[38789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:36:59 compute-0 python3.9[38791]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:36:59 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 27 18:36:59 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 27 18:36:59 compute-0 kernel: Bridge firewalling registered
Jan 27 18:36:59 compute-0 systemd-modules-load[38795]: Inserted module 'br_netfilter'
Jan 27 18:36:59 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 27 18:36:59 compute-0 sudo[38789]: pam_unix(sudo:session): session closed for user root
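[annotation] The 99-edpm.conf dropped into /etc/modules-load.d is not reproduced in the log, but the "Inserted module 'br_netfilter'" line above shows what it loads. The restart-and-verify part as shell:
    cat /etc/modules-load.d/99-edpm.conf          # expected to list br_netfilter
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter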
Jan 27 18:37:00 compute-0 sudo[38949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlwyyukkxiabrhtbhnqjwlsbqnqnchrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539019.8920572-373-160899984232690/AnsiballZ_stat.py'
Jan 27 18:37:00 compute-0 sudo[38949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:00 compute-0 python3.9[38951]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:37:00 compute-0 sudo[38949]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:00 compute-0 sudo[39072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzaknksknhnvbqzlsdmrigdrkfuctrci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539019.8920572-373-160899984232690/AnsiballZ_copy.py'
Jan 27 18:37:00 compute-0 sudo[39072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:00 compute-0 python3.9[39074]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539019.8920572-373-160899984232690/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:37:00 compute-0 sudo[39072]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:01 compute-0 sudo[39224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcugcfafzczjnotdpeqkxscuyngrzjxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539021.2473142-391-131803698063710/AnsiballZ_dnf.py'
Jan 27 18:37:01 compute-0 sudo[39224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:01 compute-0 python3.9[39226]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:37:04 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 27 18:37:05 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 27 18:37:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 18:37:05 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 18:37:05 compute-0 systemd[1]: Reloading.
Jan 27 18:37:05 compute-0 systemd-rc-local-generator[39291]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:37:05 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 18:37:06 compute-0 sudo[39224]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:06 compute-0 python3.9[40603]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:37:07 compute-0 python3.9[41560]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 27 18:37:08 compute-0 python3.9[42254]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:37:08 compute-0 sudo[43042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xytaewtocojgjznadixmwiqmffnpidtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539028.5529423-430-157168147936127/AnsiballZ_command.py'
Jan 27 18:37:08 compute-0 sudo[43042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:09 compute-0 python3.9[43055]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:37:09 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 27 18:37:09 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 18:37:09 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 18:37:09 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.114s CPU time.
Jan 27 18:37:09 compute-0 systemd[1]: run-r38a7d4345fa24103a0e5a09cb675828a.service: Deactivated successfully.
Jan 27 18:37:09 compute-0 systemd[1]: Starting Authorization Manager...
Jan 27 18:37:09 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 27 18:37:09 compute-0 polkitd[43613]: Started polkitd version 0.117
Jan 27 18:37:09 compute-0 polkitd[43613]: Loading rules from directory /etc/polkit-1/rules.d
Jan 27 18:37:09 compute-0 polkitd[43613]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 27 18:37:09 compute-0 polkitd[43613]: Finished loading, compiling and executing 2 rules
Jan 27 18:37:09 compute-0 systemd[1]: Started Authorization Manager.
Jan 27 18:37:09 compute-0 polkitd[43613]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 27 18:37:09 compute-0 sudo[43042]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:10 compute-0 sudo[43781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgfkqmwapctvgvekjhzuuxosimlhjewe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539030.0363255-439-44186753369558/AnsiballZ_systemd.py'
Jan 27 18:37:10 compute-0 sudo[43781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:10 compute-0 python3.9[43783]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:37:10 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 27 18:37:10 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 27 18:37:10 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 27 18:37:10 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 27 18:37:10 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 27 18:37:10 compute-0 sudo[43781]: pam_unix(sudo:session): session closed for user root
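[annotation] Between 18:37:01 and 18:37:10 the tuned packages are installed, the throughput-performance profile is applied, and the daemon is enabled and restarted. Equivalent commands:
    dnf -y install tuned tuned-profiles-cpu-partitioning
    tuned-adm profile throughput-performance
    systemctl enable --now tuned
    tuned-adm active        # should report throughput-performance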
Jan 27 18:37:11 compute-0 python3.9[43945]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 27 18:37:13 compute-0 sudo[44095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obqqttclosnqljgyzaldzctqozpclsou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539033.035024-496-83148922825998/AnsiballZ_systemd.py'
Jan 27 18:37:13 compute-0 sudo[44095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:13 compute-0 python3.9[44097]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:37:13 compute-0 systemd[1]: Reloading.
Jan 27 18:37:13 compute-0 systemd-rc-local-generator[44125]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:37:13 compute-0 sudo[44095]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:14 compute-0 sudo[44283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxdoljaywkcawlfmgatzmiteukrdmobv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539034.009908-496-87655356895832/AnsiballZ_systemd.py'
Jan 27 18:37:14 compute-0 sudo[44283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:14 compute-0 python3.9[44285]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:37:14 compute-0 systemd[1]: Reloading.
Jan 27 18:37:14 compute-0 systemd-rc-local-generator[44317]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:37:14 compute-0 sudo[44283]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:15 compute-0 sudo[44472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeyfxyrxddlkplgefbfmnbbrwrwyocqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539035.0940788-512-92942963502960/AnsiballZ_command.py'
Jan 27 18:37:15 compute-0 sudo[44472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:15 compute-0 python3.9[44474]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:37:15 compute-0 sudo[44472]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:16 compute-0 sudo[44625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqhialrktovfayiqdcoihfljyjbicvhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539035.811501-520-104768632397994/AnsiballZ_command.py'
Jan 27 18:37:16 compute-0 sudo[44625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:16 compute-0 python3.9[44627]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:37:16 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 27 18:37:16 compute-0 sudo[44625]: pam_unix(sudo:session): session closed for user root
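[annotation] Pulling the swap-related tasks together (dd/chmod/fstab at 18:36:37-18:36:43, mkswap/swapon here), the node ends up with a 1 GiB file-backed swap area. The same sequence as plain shell, with the fstab line reconstructed from the ansible.posix.mount arguments:
    dd if=/dev/zero of=/swap bs=1M count=1024
    chmod 600 /swap
    grep -q '^/swap ' /etc/fstab || echo '/swap none swap sw 0 0' >> /etc/fstab
    mkswap /swap
    swapon /swap
    swapon --show        # confirms the new 1 GiB swap area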
Jan 27 18:37:16 compute-0 sudo[44778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcjqpadntygpkrarjyrvxzrwepnwztyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539036.5427012-528-129116692509959/AnsiballZ_command.py'
Jan 27 18:37:16 compute-0 sudo[44778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:16 compute-0 python3.9[44780]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:37:18 compute-0 sudo[44778]: pam_unix(sudo:session): session closed for user root
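[annotation] The tls-ca-bundle.pem copied at 18:36:45 only takes effect once the consolidated trust store is rebuilt, which is what the update-ca-trust run above does. Manual equivalent, with the source path standing in for the Ansible temp directory seen in the log:
    install -m 0644 -o root -g root tls-ca-bundle.pem \
        /etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem
    update-ca-trust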
Jan 27 18:37:18 compute-0 sudo[44940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yskucfoqjatsvbjezvfsacpiskncwotr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539038.623108-536-246721584433284/AnsiballZ_command.py'
Jan 27 18:37:18 compute-0 sudo[44940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:19 compute-0 python3.9[44942]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:37:19 compute-0 sudo[44940]: pam_unix(sudo:session): session closed for user root
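[annotation] ksm.service and ksmtuned.service were disabled at 18:37:13-18:37:14; writing 2 to /sys/kernel/mm/ksm/run stops ksmd and unmerges any already-merged pages. Note that the command module call above ran without a shell (_uses_shell=False), so the '>' appears to be passed to echo as a literal argument there; the intended effect, done directly in a shell, is:
    systemctl disable --now ksm.service ksmtuned.service
    echo 2 > /sys/kernel/mm/ksm/run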
Jan 27 18:37:19 compute-0 sudo[45093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhubpwyxdfjbfnvzywrsxmzkvtxiukan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539039.24387-544-126091483570287/AnsiballZ_systemd.py'
Jan 27 18:37:19 compute-0 sudo[45093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:19 compute-0 python3.9[45095]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:37:19 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 27 18:37:19 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 27 18:37:19 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 27 18:37:19 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 27 18:37:19 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 27 18:37:19 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 27 18:37:19 compute-0 sudo[45093]: pam_unix(sudo:session): session closed for user root
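[annotation] The systemd-sysctl restart above re-reads every fragment under /etc/sysctl.d, including the 99-edpm.conf written at 18:37:00 (whose contents are not captured in this log). Equivalent ways to apply it:
    systemctl restart systemd-sysctl.service
    sysctl --system        # reloads all sysctl.d fragments directly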
Jan 27 18:37:20 compute-0 sshd-session[31463]: Connection closed by 192.168.122.31 port 56148
Jan 27 18:37:20 compute-0 sshd-session[31460]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:37:20 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 27 18:37:20 compute-0 systemd[1]: session-9.scope: Consumed 2min 24.658s CPU time.
Jan 27 18:37:20 compute-0 systemd-logind[795]: Session 9 logged out. Waiting for processes to exit.
Jan 27 18:37:20 compute-0 systemd-logind[795]: Removed session 9.
Jan 27 18:37:27 compute-0 sshd-session[45125]: Accepted publickey for zuul from 192.168.122.31 port 55534 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:37:27 compute-0 systemd-logind[795]: New session 10 of user zuul.
Jan 27 18:37:27 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 27 18:37:27 compute-0 sshd-session[45125]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:37:28 compute-0 python3.9[45278]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:37:29 compute-0 python3.9[45432]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:37:30 compute-0 sudo[45586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmixlizafcprezuaulljvljszqmttlls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539049.8857167-45-128486241501962/AnsiballZ_command.py'
Jan 27 18:37:30 compute-0 sudo[45586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:30 compute-0 python3.9[45588]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:37:30 compute-0 sudo[45586]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:31 compute-0 python3.9[45739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:37:32 compute-0 sudo[45893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpixencqfjpkxjnykvecpqwkmlcydhce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539051.7799428-65-280804889444140/AnsiballZ_setup.py'
Jan 27 18:37:32 compute-0 sudo[45893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:32 compute-0 python3.9[45895]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:37:32 compute-0 sudo[45893]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:32 compute-0 sudo[45977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qolgkvdkrrcgcqofurmwfbgfirtomsmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539051.7799428-65-280804889444140/AnsiballZ_dnf.py'
Jan 27 18:37:32 compute-0 sudo[45977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:33 compute-0 python3.9[45979]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:37:34 compute-0 sudo[45977]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:35 compute-0 sudo[46130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppegwennuhukwvflunalqoyuyagyxcgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539054.8343756-77-57690806610089/AnsiballZ_setup.py'
Jan 27 18:37:35 compute-0 sudo[46130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:35 compute-0 python3.9[46132]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:37:35 compute-0 sudo[46130]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:36 compute-0 sudo[46301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehkjsymimcdnexjhobfctxqxwrjseccd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539055.7367082-88-151264988870979/AnsiballZ_file.py'
Jan 27 18:37:36 compute-0 sudo[46301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:36 compute-0 python3.9[46303]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:37:36 compute-0 sudo[46301]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:36 compute-0 sudo[46453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlrqrxivzcsdbglnqovywouzayysiqej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539056.5144067-96-201293354251018/AnsiballZ_command.py'
Jan 27 18:37:36 compute-0 sudo[46453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:36 compute-0 python3.9[46455]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:37:37 compute-0 podman[46456]: 2026-01-27 18:37:37.044588533 +0000 UTC m=+0.045265038 system refresh
Jan 27 18:37:37 compute-0 sudo[46453]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:37 compute-0 sudo[46617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpdukgbzjhevzcxjyaarvnycpaszzdja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539057.2339146-104-103100859885058/AnsiballZ_stat.py'
Jan 27 18:37:37 compute-0 sudo[46617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:37 compute-0 python3.9[46619]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:37:37 compute-0 sudo[46617]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:37:38 compute-0 sudo[46740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klqdxjarsqatwfndakjwdhwpxpwsgyef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539057.2339146-104-103100859885058/AnsiballZ_copy.py'
Jan 27 18:37:38 compute-0 sudo[46740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:38 compute-0 python3.9[46742]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539057.2339146-104-103100859885058/.source.json follow=False _original_basename=podman_network_config.j2 checksum=2820130079b6952e2cd91513077114bc0449ae2a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:37:38 compute-0 sudo[46740]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:38 compute-0 sudo[46892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sviqkuvzspqofwqztvfthrdqvddqeggd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539058.6356199-119-32989268152029/AnsiballZ_stat.py'
Jan 27 18:37:38 compute-0 sudo[46892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:39 compute-0 python3.9[46894]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:37:39 compute-0 sudo[46892]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:39 compute-0 sudo[47015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kianksomdsdjuwqbayytixkeaweigzky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539058.6356199-119-32989268152029/AnsiballZ_copy.py'
Jan 27 18:37:39 compute-0 sudo[47015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:39 compute-0 python3.9[47017]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539058.6356199-119-32989268152029/.source.conf follow=False _original_basename=registries.conf.j2 checksum=1be7269277ffb3d8266fb723db456e35c93aa504 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:37:39 compute-0 sudo[47015]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:40 compute-0 sudo[47167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uakrzewcpauaggxutfybnrqggciztvjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539059.967393-135-196668100605996/AnsiballZ_ini_file.py'
Jan 27 18:37:40 compute-0 sudo[47167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:40 compute-0 python3.9[47169]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:37:40 compute-0 sudo[47167]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:41 compute-0 sudo[47319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhaichpkwrkdgwuqvpnwfuhaamlvrbdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539060.835321-135-153110594103070/AnsiballZ_ini_file.py'
Jan 27 18:37:41 compute-0 sudo[47319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:41 compute-0 python3.9[47321]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:37:41 compute-0 sudo[47319]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:41 compute-0 sudo[47471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsbapbcklhulwbwvuzpyzzkzkkbmclbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539061.5567214-135-99282199833384/AnsiballZ_ini_file.py'
Jan 27 18:37:41 compute-0 sudo[47471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:42 compute-0 python3.9[47473]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:37:42 compute-0 sudo[47471]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:42 compute-0 sudo[47623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luzsqxyjzufeehovvfxbaflzfjvdwryb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539062.286164-135-63914392399735/AnsiballZ_ini_file.py'
Jan 27 18:37:42 compute-0 sudo[47623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:42 compute-0 python3.9[47625]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:37:42 compute-0 sudo[47623]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:43 compute-0 python3.9[47775]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:37:44 compute-0 sudo[47927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjozquyhngamumkvmlchzefnauwiboht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539063.910993-175-211581386318243/AnsiballZ_dnf.py'
Jan 27 18:37:44 compute-0 sudo[47927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:44 compute-0 python3.9[47929]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 18:37:45 compute-0 sudo[47927]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:46 compute-0 sudo[48080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdqaacwdzcaflrzivdirxlaewwvoimgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539065.9451895-183-6874207016167/AnsiballZ_dnf.py'
Jan 27 18:37:46 compute-0 sudo[48080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:46 compute-0 python3.9[48082]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 18:37:48 compute-0 sudo[48080]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:48 compute-0 sudo[48240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgukpqyirraozxeydxwgapaoomtglkdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539068.5062346-193-252942960022193/AnsiballZ_dnf.py'
Jan 27 18:37:48 compute-0 sudo[48240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:49 compute-0 python3.9[48242]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 18:37:50 compute-0 sudo[48240]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:50 compute-0 sudo[48393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odxbrhkoriraddwzcbdqswdyzygjobje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539070.660106-202-183989207803195/AnsiballZ_dnf.py'
Jan 27 18:37:50 compute-0 sudo[48393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:51 compute-0 python3.9[48395]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 18:37:52 compute-0 sudo[48393]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:53 compute-0 sudo[48546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftyifssqhbnrxyrqhbxyxqwpyvrdnpgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539072.7512834-213-52837153119241/AnsiballZ_dnf.py'
Jan 27 18:37:53 compute-0 sudo[48546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:53 compute-0 python3.9[48548]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 18:37:54 compute-0 sudo[48546]: pam_unix(sudo:session): session closed for user root
Jan 27 18:37:55 compute-0 sudo[48702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqjyvgleqiedjndakylyamaanfazpqtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539074.962984-221-148647759413553/AnsiballZ_dnf.py'
Jan 27 18:37:55 compute-0 sudo[48702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:37:55 compute-0 python3.9[48704]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 18:38:00 compute-0 sudo[48702]: pam_unix(sudo:session): session closed for user root
Jan 27 18:38:01 compute-0 sudo[48871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzkpjwobsaveaemudklojwheiublwihp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539081.1846309-230-274690193244037/AnsiballZ_dnf.py'
Jan 27 18:38:01 compute-0 sudo[48871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:38:01 compute-0 python3.9[48873]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 18:38:02 compute-0 sudo[48871]: pam_unix(sudo:session): session closed for user root
Jan 27 18:38:03 compute-0 sudo[49024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpvovsqfczatgtkvcfjxdadasokfzrpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539083.2235842-239-23719454434024/AnsiballZ_dnf.py'
Jan 27 18:38:03 compute-0 sudo[49024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:38:03 compute-0 python3.9[49026]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 18:38:10 compute-0 sshd-session[49041]: Invalid user solana from 45.148.10.240 port 53348
Jan 27 18:38:10 compute-0 sshd-session[49041]: Connection closed by invalid user solana 45.148.10.240 port 53348 [preauth]
Jan 27 18:38:14 compute-0 sudo[49024]: pam_unix(sudo:session): session closed for user root
Jan 27 18:38:15 compute-0 sudo[49362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zblgdjxjnwnaqxymqvwiskntxvlfppkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539095.058035-248-250267298816129/AnsiballZ_dnf.py'
Jan 27 18:38:15 compute-0 sudo[49362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:38:15 compute-0 python3.9[49364]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 18:38:17 compute-0 sudo[49362]: pam_unix(sudo:session): session closed for user root
Jan 27 18:38:17 compute-0 sudo[49518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpmcnoslxjzzdlmhhrnnjkkrncpciwkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539097.2987187-258-160112937737199/AnsiballZ_dnf.py'
Jan 27 18:38:17 compute-0 sudo[49518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:38:17 compute-0 python3.9[49520]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 18:38:19 compute-0 sudo[49518]: pam_unix(sudo:session): session closed for user root
Jan 27 18:38:19 compute-0 sudo[49675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmnclvovrsujplkaqawrqlsummuawolz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539099.6832488-269-105903422967660/AnsiballZ_file.py'
Jan 27 18:38:19 compute-0 sudo[49675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:38:20 compute-0 python3.9[49677]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:38:20 compute-0 sudo[49675]: pam_unix(sudo:session): session closed for user root
Jan 27 18:38:20 compute-0 sudo[49850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ducacmwcvcrdveryrokncbkrapqsuhtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539100.3333488-277-55818300104303/AnsiballZ_stat.py'
Jan 27 18:38:20 compute-0 sudo[49850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:38:20 compute-0 python3.9[49852]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:38:20 compute-0 sudo[49850]: pam_unix(sudo:session): session closed for user root
Jan 27 18:38:21 compute-0 sudo[49973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdkibirikmourqqceejrxyowqlldihdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539100.3333488-277-55818300104303/AnsiballZ_copy.py'
Jan 27 18:38:21 compute-0 sudo[49973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:38:21 compute-0 python3.9[49975]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769539100.3333488-277-55818300104303/.source.json _original_basename=.lfj7g_7u follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:38:21 compute-0 sudo[49973]: pam_unix(sudo:session): session closed for user root
Jan 27 18:38:22 compute-0 sudo[50125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyzznbjbldubnxnwzmtkbpvembzfyjvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539101.7327874-295-170883386110144/AnsiballZ_podman_image.py'
Jan 27 18:38:22 compute-0 sudo[50125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:38:22 compute-0 python3.9[50127]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 18:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2194616910-lower\x2dmapped.mount: Deactivated successfully.
Jan 27 18:38:28 compute-0 podman[50140]: 2026-01-27 18:38:28.640166164 +0000 UTC m=+6.156939903 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 18:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:38:28 compute-0 sudo[50125]: pam_unix(sudo:session): session closed for user root
Jan 27 18:38:29 compute-0 sudo[50433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tivjullfrrcdtkajyyvmpfsrnsihujud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539109.2080314-306-246397264914733/AnsiballZ_podman_image.py'
Jan 27 18:38:29 compute-0 sudo[50433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:38:29 compute-0 python3.9[50435]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 18:38:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:38:39 compute-0 podman[50447]: 2026-01-27 18:38:39.093507533 +0000 UTC m=+9.326046937 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 18:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:38:39 compute-0 sudo[50433]: pam_unix(sudo:session): session closed for user root
Jan 27 18:38:39 compute-0 sudo[50744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghhmiznnwsfzlqhmtpowviigucbarvdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539119.5645344-316-200107711699293/AnsiballZ_podman_image.py'
Jan 27 18:38:39 compute-0 sudo[50744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:38:39 compute-0 python3.9[50746]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 18:38:50 compute-0 podman[50759]: 2026-01-27 18:38:50.616983099 +0000 UTC m=+10.572378594 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 18:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:38:50 compute-0 sudo[50744]: pam_unix(sudo:session): session closed for user root
Jan 27 18:38:51 compute-0 sudo[51029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbgavgziyxnnnfetskaghpmtfvjbwivi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539131.1882238-327-108734652923478/AnsiballZ_podman_image.py'
Jan 27 18:38:51 compute-0 sudo[51029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:38:51 compute-0 python3.9[51031]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 18:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:05 compute-0 podman[51043]: 2026-01-27 18:39:05.084054654 +0000 UTC m=+13.319236175 image pull 68a60f9093568ce7a1c5b4524fb1e8f03692d56fcec899fd30bbb31f7cc46992 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 27 18:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:05 compute-0 sudo[51029]: pam_unix(sudo:session): session closed for user root
Jan 27 18:39:05 compute-0 sudo[51369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybphnmdyhsjlmtigctsiabcmrqzeqwio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539145.553534-327-168743944402095/AnsiballZ_podman_image.py'
Jan 27 18:39:05 compute-0 sudo[51369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:39:06 compute-0 python3.9[51371]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 18:39:07 compute-0 podman[51383]: 2026-01-27 18:39:07.290321788 +0000 UTC m=+1.193083477 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 27 18:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:07 compute-0 sudo[51369]: pam_unix(sudo:session): session closed for user root
Jan 27 18:39:07 compute-0 sudo[51655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgsbcwjrnkexkltfklmnrozgqskfscyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539147.7370913-343-72181667484267/AnsiballZ_podman_image.py'
Jan 27 18:39:08 compute-0 sudo[51655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:39:08 compute-0 python3.9[51657]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 18:39:11 compute-0 podman[51669]: 2026-01-27 18:39:11.322386818 +0000 UTC m=+3.074882130 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 27 18:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:11 compute-0 sudo[51655]: pam_unix(sudo:session): session closed for user root
Jan 27 18:39:11 compute-0 sudo[51925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqhsgwighhtsormvufyauddrgtyowevg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539151.700872-343-78614712404431/AnsiballZ_podman_image.py'
Jan 27 18:39:11 compute-0 sudo[51925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:39:12 compute-0 python3.9[51927]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 18:39:19 compute-0 podman[51941]: 2026-01-27 18:39:19.097943559 +0000 UTC m=+6.855509220 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 27 18:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:39:19 compute-0 sudo[51925]: pam_unix(sudo:session): session closed for user root
Jan 27 18:39:19 compute-0 sshd-session[45128]: Connection closed by 192.168.122.31 port 55534
Jan 27 18:39:19 compute-0 sshd-session[45125]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:39:19 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 27 18:39:19 compute-0 systemd[1]: session-10.scope: Consumed 2min 37.678s CPU time.
Jan 27 18:39:19 compute-0 systemd-logind[795]: Session 10 logged out. Waiting for processes to exit.
Jan 27 18:39:19 compute-0 systemd-logind[795]: Removed session 10.
Jan 27 18:39:25 compute-0 sshd-session[52191]: Accepted publickey for zuul from 192.168.122.31 port 35488 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:39:25 compute-0 systemd-logind[795]: New session 11 of user zuul.
Jan 27 18:39:25 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 27 18:39:25 compute-0 sshd-session[52191]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:39:26 compute-0 python3.9[52344]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:39:27 compute-0 sudo[52498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lydhaddssrrxlysqjgrzklxaokvaygbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539167.0374234-31-94480502765916/AnsiballZ_getent.py'
Jan 27 18:39:27 compute-0 sudo[52498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:39:27 compute-0 python3.9[52500]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 27 18:39:27 compute-0 sudo[52498]: pam_unix(sudo:session): session closed for user root
Jan 27 18:39:28 compute-0 sudo[52651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxhhyvsesfhhvsmkdsavbzokkrapgcbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539167.8663516-39-181522995715048/AnsiballZ_group.py'
Jan 27 18:39:28 compute-0 sudo[52651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:39:28 compute-0 python3.9[52653]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 18:39:28 compute-0 groupadd[52654]: group added to /etc/group: name=openvswitch, GID=42476
Jan 27 18:39:28 compute-0 groupadd[52654]: group added to /etc/gshadow: name=openvswitch
Jan 27 18:39:28 compute-0 groupadd[52654]: new group: name=openvswitch, GID=42476
Jan 27 18:39:28 compute-0 sudo[52651]: pam_unix(sudo:session): session closed for user root
Jan 27 18:39:29 compute-0 sudo[52809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsjwfyoisujvvmstjxdgxbwgxmxbdcja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539168.7926054-47-146925115198412/AnsiballZ_user.py'
Jan 27 18:39:29 compute-0 sudo[52809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:39:29 compute-0 python3.9[52811]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 18:39:29 compute-0 useradd[52813]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 18:39:29 compute-0 useradd[52813]: add 'openvswitch' to group 'hugetlbfs'
Jan 27 18:39:29 compute-0 useradd[52813]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 27 18:39:29 compute-0 sudo[52809]: pam_unix(sudo:session): session closed for user root
Jan 27 18:39:30 compute-0 sudo[52969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxthhsqejkvttyspdeskfowxnutgkxun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539169.8154511-57-139748902455678/AnsiballZ_setup.py'
Jan 27 18:39:30 compute-0 sudo[52969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:39:30 compute-0 python3.9[52971]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:39:30 compute-0 sudo[52969]: pam_unix(sudo:session): session closed for user root
Jan 27 18:39:31 compute-0 sudo[53053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziptvdrdnlifsnnfvgqtwyfhzknncqfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539169.8154511-57-139748902455678/AnsiballZ_dnf.py'
Jan 27 18:39:31 compute-0 sudo[53053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:39:31 compute-0 python3.9[53055]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 18:39:33 compute-0 sudo[53053]: pam_unix(sudo:session): session closed for user root
Jan 27 18:39:33 compute-0 sudo[53215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlqcwbclprtsdgtgnsfzrczeefyjcpio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539173.431035-71-253589053990209/AnsiballZ_dnf.py'
Jan 27 18:39:33 compute-0 sudo[53215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:39:33 compute-0 python3.9[53217]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:39:48 compute-0 kernel: SELinux:  Converting 2737 SID table entries...
Jan 27 18:39:48 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 18:39:48 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 18:39:48 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 18:39:48 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 18:39:48 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 18:39:48 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 18:39:48 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 18:39:48 compute-0 groupadd[53240]: group added to /etc/group: name=unbound, GID=994
Jan 27 18:39:48 compute-0 groupadd[53240]: group added to /etc/gshadow: name=unbound
Jan 27 18:39:48 compute-0 groupadd[53240]: new group: name=unbound, GID=994
Jan 27 18:39:48 compute-0 useradd[53247]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 27 18:39:49 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 27 18:39:49 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 27 18:39:50 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 18:39:50 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 18:39:50 compute-0 systemd[1]: Reloading.
Jan 27 18:39:50 compute-0 systemd-rc-local-generator[53741]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:39:50 compute-0 systemd-sysv-generator[53746]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:39:50 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 18:39:51 compute-0 sudo[53215]: pam_unix(sudo:session): session closed for user root
Jan 27 18:39:51 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 18:39:51 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 18:39:51 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.001s CPU time.
Jan 27 18:39:51 compute-0 systemd[1]: run-r65df6e5fb8284a6995b3cf3d2077a559.service: Deactivated successfully.
Jan 27 18:39:52 compute-0 sudo[54312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxfxpgucincausgcfawwxirxawfvcfgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539191.495784-79-73473178394988/AnsiballZ_systemd.py'
Jan 27 18:39:52 compute-0 sudo[54312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:39:52 compute-0 python3.9[54314]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 18:39:52 compute-0 systemd[1]: Reloading.
Jan 27 18:39:52 compute-0 systemd-rc-local-generator[54345]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:39:52 compute-0 systemd-sysv-generator[54349]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:39:52 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 27 18:39:52 compute-0 chown[54356]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 27 18:39:53 compute-0 ovs-ctl[54361]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 27 18:39:53 compute-0 ovs-ctl[54361]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 27 18:39:53 compute-0 ovs-ctl[54361]: Starting ovsdb-server [  OK  ]
Jan 27 18:39:53 compute-0 ovs-vsctl[54410]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 27 18:39:53 compute-0 ovs-vsctl[54430]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d3b19c13-a2f4-422f-8fa1-01ce64dc0c58\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 27 18:39:53 compute-0 ovs-ctl[54361]: Configuring Open vSwitch system IDs [  OK  ]
Jan 27 18:39:53 compute-0 ovs-vsctl[54436]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 27 18:39:53 compute-0 ovs-ctl[54361]: Enabling remote OVSDB managers [  OK  ]
Jan 27 18:39:53 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 27 18:39:53 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 27 18:39:53 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 27 18:39:53 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 27 18:39:53 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 27 18:39:53 compute-0 ovs-ctl[54481]: Inserting openvswitch module [  OK  ]
Jan 27 18:39:53 compute-0 ovs-ctl[54449]: Starting ovs-vswitchd [  OK  ]
Jan 27 18:39:53 compute-0 ovs-vsctl[54501]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 27 18:39:53 compute-0 ovs-ctl[54449]: Enabling remote OVSDB managers [  OK  ]
Jan 27 18:39:53 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 27 18:39:53 compute-0 systemd[1]: Starting Open vSwitch...
Jan 27 18:39:53 compute-0 systemd[1]: Finished Open vSwitch.
Jan 27 18:39:53 compute-0 sudo[54312]: pam_unix(sudo:session): session closed for user root
Jan 27 18:39:54 compute-0 python3.9[54653]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:39:55 compute-0 sudo[54803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axzgpvrdgihhputdfmcmcaulxumxzxaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539194.784472-97-245096521953393/AnsiballZ_sefcontext.py'
Jan 27 18:39:55 compute-0 sudo[54803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:39:55 compute-0 python3.9[54805]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 27 18:39:56 compute-0 kernel: SELinux:  Converting 2751 SID table entries...
Jan 27 18:39:56 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 18:39:56 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 18:39:56 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 18:39:56 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 18:39:56 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 18:39:56 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 18:39:56 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 18:39:56 compute-0 sudo[54803]: pam_unix(sudo:session): session closed for user root
Jan 27 18:39:57 compute-0 python3.9[54960]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:39:58 compute-0 sudo[55116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmvlvqdkvyuexkojlpniydqrzyyfmstw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539198.3638623-115-260619564290496/AnsiballZ_dnf.py'
Jan 27 18:39:58 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 27 18:39:58 compute-0 sudo[55116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:39:59 compute-0 python3.9[55118]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:40:00 compute-0 sudo[55116]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:01 compute-0 sudo[55269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdhgjrpanjlhnlztatmhtsozgavywyrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539200.5614543-123-6763789993647/AnsiballZ_command.py'
Jan 27 18:40:01 compute-0 sudo[55269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:01 compute-0 python3.9[55271]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:40:01 compute-0 sudo[55269]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:02 compute-0 sudo[55556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thtknorwcsgwqexvlutdwgwpfmeauhwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539202.1533957-131-25878791955538/AnsiballZ_file.py'
Jan 27 18:40:02 compute-0 sudo[55556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:02 compute-0 python3.9[55558]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 27 18:40:02 compute-0 sudo[55556]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:03 compute-0 python3.9[55708]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:40:04 compute-0 sudo[55860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzfyrfzusczvbbpmzgwhfffvawywpxls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539203.8146448-147-47690043284145/AnsiballZ_dnf.py'
Jan 27 18:40:04 compute-0 sudo[55860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:04 compute-0 python3.9[55862]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:40:07 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 18:40:07 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 18:40:07 compute-0 systemd[1]: Reloading.
Jan 27 18:40:07 compute-0 systemd-rc-local-generator[55901]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:40:07 compute-0 systemd-sysv-generator[55906]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:40:08 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 18:40:08 compute-0 sudo[55860]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 18:40:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 18:40:08 compute-0 systemd[1]: run-r098fa383f9744284a7b2e1b1abd2526a.service: Deactivated successfully.
Jan 27 18:40:08 compute-0 sudo[56179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghnbzznjhlyhjizzkkdgjbdiubtmxtnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539208.6675494-155-84129737863067/AnsiballZ_systemd.py'
Jan 27 18:40:08 compute-0 sudo[56179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:09 compute-0 python3.9[56181]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:40:09 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 27 18:40:09 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 27 18:40:09 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 27 18:40:09 compute-0 systemd[1]: Stopping Network Manager...
Jan 27 18:40:09 compute-0 NetworkManager[7188]: <info>  [1769539209.3179] caught SIGTERM, shutting down normally.
Jan 27 18:40:09 compute-0 NetworkManager[7188]: <info>  [1769539209.3197] dhcp4 (eth0): canceled DHCP transaction
Jan 27 18:40:09 compute-0 NetworkManager[7188]: <info>  [1769539209.3197] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 18:40:09 compute-0 NetworkManager[7188]: <info>  [1769539209.3197] dhcp4 (eth0): state changed no lease
Jan 27 18:40:09 compute-0 NetworkManager[7188]: <info>  [1769539209.3200] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 18:40:09 compute-0 NetworkManager[7188]: <info>  [1769539209.3260] exiting (success)
Jan 27 18:40:09 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 18:40:09 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 18:40:09 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 27 18:40:09 compute-0 systemd[1]: Stopped Network Manager.
Jan 27 18:40:09 compute-0 systemd[1]: NetworkManager.service: Consumed 15.763s CPU time, 4.3M memory peak, read 0B from disk, written 11.0K to disk.
Jan 27 18:40:09 compute-0 systemd[1]: Starting Network Manager...
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.3868] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:89d8d250-28a2-43e9-80a8-3ccb353a2463)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.3870] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.3935] manager[0x55eb8272c000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 18:40:09 compute-0 systemd[1]: Starting Hostname Service...
Jan 27 18:40:09 compute-0 systemd[1]: Started Hostname Service.
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4844] hostname: hostname: using hostnamed
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4847] hostname: static hostname changed from (none) to "compute-0"
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4852] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4857] manager[0x55eb8272c000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4857] manager[0x55eb8272c000]: rfkill: WWAN hardware radio set enabled
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4877] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4885] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4886] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4886] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4887] manager: Networking is enabled by state file
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4888] settings: Loaded settings plugin: keyfile (internal)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4891] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4914] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4923] dhcp: init: Using DHCP client 'internal'
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4926] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4931] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4935] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4941] device (lo): Activation: starting connection 'lo' (62ecf3fa-b7e2-49f7-a1e5-4df78c409860)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4947] device (eth0): carrier: link connected
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4950] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4954] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4954] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4960] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4965] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4970] device (eth1): carrier: link connected
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4973] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4977] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (b3f7d2dc-0c1f-500c-bf63-a687d2e42193) (indicated)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4978] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4983] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4989] device (eth1): Activation: starting connection 'ci-private-network' (b3f7d2dc-0c1f-500c-bf63-a687d2e42193)
Jan 27 18:40:09 compute-0 systemd[1]: Started Network Manager.
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.4997] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5008] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5011] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5015] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5018] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5021] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5023] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5026] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5031] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5039] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5043] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5050] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5061] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5069] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5071] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5078] device (lo): Activation: successful, device activated.
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5088] dhcp4 (eth0): state changed new lease, address=38.102.83.238
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5095] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 18:40:09 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5169] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5177] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5179] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5183] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5186] device (eth1): Activation: successful, device activated.
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5209] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5211] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5215] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5218] device (eth0): Activation: successful, device activated.
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5223] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 18:40:09 compute-0 NetworkManager[56191]: <info>  [1769539209.5225] manager: startup complete
Jan 27 18:40:09 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 27 18:40:09 compute-0 sudo[56179]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:09 compute-0 sudo[56405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blzizcppwbpjkfzfnxhcutujxnfalnst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539209.7039719-163-263905828001466/AnsiballZ_dnf.py'
Jan 27 18:40:09 compute-0 sudo[56405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:10 compute-0 python3.9[56407]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:40:19 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 18:40:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 18:40:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 18:40:20 compute-0 systemd[1]: Reloading.
Jan 27 18:40:20 compute-0 systemd-rc-local-generator[56461]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:40:20 compute-0 systemd-sysv-generator[56464]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:40:20 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 18:40:21 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 18:40:21 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 18:40:21 compute-0 systemd[1]: run-ref78049f45974bb282a3a244a1e7f5c3.service: Deactivated successfully.
Jan 27 18:40:21 compute-0 sudo[56405]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:21 compute-0 sudo[56865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npbanjphbgyvqobhxrpavvtpwjygyugo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539221.6549947-175-218172692321340/AnsiballZ_stat.py'
Jan 27 18:40:21 compute-0 sudo[56865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:22 compute-0 python3.9[56867]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:40:22 compute-0 sudo[56865]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:22 compute-0 sudo[57017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rushlthqffgqffcehensojhupfecjwax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539222.4317412-184-149506989857374/AnsiballZ_ini_file.py'
Jan 27 18:40:22 compute-0 sudo[57017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:23 compute-0 python3.9[57019]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:40:23 compute-0 sudo[57017]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:23 compute-0 sudo[57173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmpkuylyeaubnsyevgupudrdeqpabcqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539223.4201128-194-18252739495519/AnsiballZ_ini_file.py'
Jan 27 18:40:23 compute-0 sudo[57173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:23 compute-0 sshd-session[57069]: Invalid user solana from 45.148.10.240 port 40840
Jan 27 18:40:23 compute-0 sshd-session[57069]: Connection closed by invalid user solana 45.148.10.240 port 40840 [preauth]
Jan 27 18:40:23 compute-0 python3.9[57175]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:40:23 compute-0 sudo[57173]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:24 compute-0 sudo[57325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djezgqndtnoifdeghahcikpfwoqgewip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539224.079859-194-152037000226095/AnsiballZ_ini_file.py'
Jan 27 18:40:24 compute-0 sudo[57325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:24 compute-0 python3.9[57327]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:40:24 compute-0 sudo[57325]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:25 compute-0 sudo[57477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anhfvtatxajmtinqprquzpbcceyzvydw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539224.9570615-209-88025619709520/AnsiballZ_ini_file.py'
Jan 27 18:40:25 compute-0 sudo[57477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:25 compute-0 python3.9[57479]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:40:25 compute-0 sudo[57477]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:25 compute-0 sudo[57629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voeqhliphfrxtkzbllsrvclghvnslrdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539225.7030113-209-220529376504463/AnsiballZ_ini_file.py'
Jan 27 18:40:25 compute-0 sudo[57629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:26 compute-0 python3.9[57631]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:40:26 compute-0 sudo[57629]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:26 compute-0 sudo[57781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkvkeadgkqnlwixkofqbciiqzszpnkfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539226.3806498-224-34113748487283/AnsiballZ_stat.py'
Jan 27 18:40:26 compute-0 sudo[57781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:26 compute-0 python3.9[57783]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:40:26 compute-0 sudo[57781]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:27 compute-0 sudo[57904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atuxplavdpnqtfxnqrvztivlnynskqqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539226.3806498-224-34113748487283/AnsiballZ_copy.py'
Jan 27 18:40:27 compute-0 sudo[57904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:27 compute-0 python3.9[57906]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539226.3806498-224-34113748487283/.source _original_basename=.o9_cxb49 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:40:27 compute-0 sudo[57904]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:28 compute-0 sudo[58056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiojtrvlkxlmcbqoahtpekjrjdntanbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539227.8460736-239-224535166774203/AnsiballZ_file.py'
Jan 27 18:40:28 compute-0 sudo[58056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:28 compute-0 python3.9[58058]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:40:28 compute-0 sudo[58056]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:29 compute-0 sudo[58208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjytkaeupatbjkfmzzlwqhvenvhshyux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539228.5084198-247-104821916566045/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 27 18:40:29 compute-0 sudo[58208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:29 compute-0 python3.9[58210]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 27 18:40:29 compute-0 sudo[58208]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:29 compute-0 sudo[58360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyshooquamjqswpfyyhgzzdfcaizweur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539229.5802186-256-275580758844005/AnsiballZ_file.py'
Jan 27 18:40:29 compute-0 sudo[58360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:30 compute-0 python3.9[58362]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:40:30 compute-0 sudo[58360]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:30 compute-0 sudo[58512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsplxkthngngbggyisozdbbmrtnendxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539230.3702629-266-246273397455713/AnsiballZ_stat.py'
Jan 27 18:40:30 compute-0 sudo[58512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:30 compute-0 sudo[58512]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:31 compute-0 sudo[58635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weqecultviddlrzavgkedibstfjornfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539230.3702629-266-246273397455713/AnsiballZ_copy.py'
Jan 27 18:40:31 compute-0 sudo[58635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:31 compute-0 sudo[58635]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:32 compute-0 sudo[58787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-petpclscjnlnfssvdgzxorkhlexspxyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539231.679869-281-21916797928586/AnsiballZ_slurp.py'
Jan 27 18:40:32 compute-0 sudo[58787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:32 compute-0 python3.9[58789]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 27 18:40:32 compute-0 sudo[58787]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:33 compute-0 sudo[58962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smjxdcxxlwqpfqncnlhbhgbjcoercogy ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539232.5755758-290-232025342657865/async_wrapper.py j350936886010 300 /home/zuul/.ansible/tmp/ansible-tmp-1769539232.5755758-290-232025342657865/AnsiballZ_edpm_os_net_config.py _'
Jan 27 18:40:33 compute-0 sudo[58962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:33 compute-0 ansible-async_wrapper.py[58964]: Invoked with j350936886010 300 /home/zuul/.ansible/tmp/ansible-tmp-1769539232.5755758-290-232025342657865/AnsiballZ_edpm_os_net_config.py _
Jan 27 18:40:33 compute-0 ansible-async_wrapper.py[58967]: Starting module and watcher
Jan 27 18:40:33 compute-0 ansible-async_wrapper.py[58967]: Start watching 58968 (300)
Jan 27 18:40:33 compute-0 ansible-async_wrapper.py[58968]: Start module (58968)
Jan 27 18:40:33 compute-0 ansible-async_wrapper.py[58964]: Return async_wrapper task started.
Jan 27 18:40:33 compute-0 sudo[58962]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:33 compute-0 python3.9[58969]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 27 18:40:34 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 27 18:40:34 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 27 18:40:34 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 27 18:40:34 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 27 18:40:34 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6142] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6170] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6793] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6794] audit: op="connection-add" uuid="29f7140f-0cad-47eb-9f4a-9f52a2a91300" name="br-ex-br" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6808] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6809] audit: op="connection-add" uuid="a56b30b4-1d9b-40fe-ab78-486e3c47441b" name="br-ex-port" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6819] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6820] audit: op="connection-add" uuid="90527b8a-e87d-4b92-b743-d47fa6ba6ba1" name="eth1-port" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6830] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6831] audit: op="connection-add" uuid="0a361251-5f00-4c5c-a904-225d45fa86ba" name="vlan20-port" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6841] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6842] audit: op="connection-add" uuid="545809b9-e542-4075-9b08-8465d066d16f" name="vlan21-port" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6851] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6852] audit: op="connection-add" uuid="8668d546-e898-4761-8b44-d8c01106555e" name="vlan22-port" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6869] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6884] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6885] audit: op="connection-add" uuid="63514ff1-09b6-472f-96c0-af3ecf5b5b42" name="br-ex-if" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6937] audit: op="connection-update" uuid="b3f7d2dc-0c1f-500c-bf63-a687d2e42193" name="ci-private-network" args="ovs-external-ids.data,connection.port-type,connection.controller,connection.slave-type,connection.master,connection.timestamp,ipv4.never-default,ipv4.dns,ipv4.addresses,ipv4.routes,ipv4.routing-rules,ipv4.method,ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses,ipv6.routes,ipv6.routing-rules,ipv6.method,ovs-interface.type" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6954] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6955] audit: op="connection-add" uuid="ccb1a110-8f19-4322-aac4-3fa8ac7936c5" name="vlan20-if" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6968] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6969] audit: op="connection-add" uuid="bfe1a2ff-578d-40a5-b797-81702f9c28bc" name="vlan21-if" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6983] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6984] audit: op="connection-add" uuid="e3b06a4c-f361-4c87-a80b-5affa3283a8f" name="vlan22-if" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.6994] audit: op="connection-delete" uuid="4d7ab4fa-c1ce-3c57-a6d4-71cc23f6c63c" name="Wired connection 1" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7005] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <warn>  [1769539235.7009] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7014] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7017] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (29f7140f-0cad-47eb-9f4a-9f52a2a91300)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7017] audit: op="connection-activate" uuid="29f7140f-0cad-47eb-9f4a-9f52a2a91300" name="br-ex-br" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7019] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <warn>  [1769539235.7020] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7024] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7027] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a56b30b4-1d9b-40fe-ab78-486e3c47441b)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7029] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <warn>  [1769539235.7030] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7033] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7036] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (90527b8a-e87d-4b92-b743-d47fa6ba6ba1)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7038] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <warn>  [1769539235.7039] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7042] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7045] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (0a361251-5f00-4c5c-a904-225d45fa86ba)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7047] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <warn>  [1769539235.7048] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7052] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7054] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (545809b9-e542-4075-9b08-8465d066d16f)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7056] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <warn>  [1769539235.7057] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7061] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7064] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (8668d546-e898-4761-8b44-d8c01106555e)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7065] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7067] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7068] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7072] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <warn>  [1769539235.7073] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7076] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7079] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (63514ff1-09b6-472f-96c0-af3ecf5b5b42)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7079] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7082] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7083] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7084] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7085] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7093] device (eth1): disconnecting for new activation request.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7094] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7097] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7098] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7099] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7100] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <warn>  [1769539235.7101] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7103] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7107] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (ccb1a110-8f19-4322-aac4-3fa8ac7936c5)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7108] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7112] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7114] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7115] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7118] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <warn>  [1769539235.7119] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7123] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7127] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (bfe1a2ff-578d-40a5-b797-81702f9c28bc)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7128] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7131] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7133] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7134] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7137] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <warn>  [1769539235.7138] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7142] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7147] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (e3b06a4c-f361-4c87-a80b-5affa3283a8f)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7148] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7151] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7153] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7155] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7157] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7169] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7171] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7174] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7177] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7183] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7187] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7191] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7208] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7211] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7219] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7225] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7231] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7233] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 systemd-udevd[58975]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 18:40:35 compute-0 kernel: Timeout policy base is empty
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7238] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7243] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7247] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7249] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7256] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7262] dhcp4 (eth0): canceled DHCP transaction
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7263] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7263] dhcp4 (eth0): state changed no lease
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7265] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7277] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7282] audit: op="device-reapply" interface="eth1" ifindex=3 pid=58970 uid=0 result="fail" reason="Device is not activated"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7287] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7296] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 27 18:40:35 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7399] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7403] dhcp4 (eth0): state changed new lease, address=38.102.83.238
Jan 27 18:40:35 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7480] device (eth1): disconnecting for new activation request.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7481] audit: op="connection-activate" uuid="b3f7d2dc-0c1f-500c-bf63-a687d2e42193" name="ci-private-network" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7487] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 27 18:40:35 compute-0 kernel: br-ex: entered promiscuous mode
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7635] device (eth1): Activation: starting connection 'ci-private-network' (b3f7d2dc-0c1f-500c-bf63-a687d2e42193)
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7644] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7676] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7681] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7689] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7693] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7706] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7710] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7712] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7715] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 kernel: vlan22: entered promiscuous mode
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7718] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7720] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58970 uid=0 result="success"
Jan 27 18:40:35 compute-0 systemd-udevd[58974]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7733] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7742] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7747] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7752] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7757] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7763] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7768] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7774] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 kernel: vlan20: entered promiscuous mode
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7780] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7787] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 systemd-udevd[58976]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7792] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7805] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7810] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7846] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7853] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7859] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7864] device (eth1): Activation: successful, device activated.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7877] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7879] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 kernel: vlan21: entered promiscuous mode
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7929] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7932] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7940] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7945] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7961] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7961] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7963] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7966] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.7977] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.8029] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.8034] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.8038] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.8044] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.8067] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.8100] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.8105] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 18:40:35 compute-0 NetworkManager[56191]: <info>  [1769539235.8111] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 18:40:36 compute-0 NetworkManager[56191]: <info>  [1769539236.9506] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58970 uid=0 result="success"
Jan 27 18:40:37 compute-0 NetworkManager[56191]: <info>  [1769539237.1348] checkpoint[0x55eb82702950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 27 18:40:37 compute-0 NetworkManager[56191]: <info>  [1769539237.1353] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58970 uid=0 result="success"
Jan 27 18:40:37 compute-0 sudo[59303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imusrrabjmostljiybtzoxcnkutuufzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539236.8229547-290-230750255401476/AnsiballZ_async_status.py'
Jan 27 18:40:37 compute-0 sudo[59303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:37 compute-0 NetworkManager[56191]: <info>  [1769539237.4364] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58970 uid=0 result="success"
Jan 27 18:40:37 compute-0 NetworkManager[56191]: <info>  [1769539237.4378] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58970 uid=0 result="success"
Jan 27 18:40:37 compute-0 python3.9[59305]: ansible-ansible.legacy.async_status Invoked with jid=j350936886010.58964 mode=status _async_dir=/root/.ansible_async
Jan 27 18:40:37 compute-0 sudo[59303]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:37 compute-0 NetworkManager[56191]: <info>  [1769539237.6304] audit: op="networking-control" arg="global-dns-configuration" pid=58970 uid=0 result="success"
Jan 27 18:40:37 compute-0 NetworkManager[56191]: <info>  [1769539237.6333] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 27 18:40:37 compute-0 NetworkManager[56191]: <info>  [1769539237.6361] audit: op="networking-control" arg="global-dns-configuration" pid=58970 uid=0 result="success"
Jan 27 18:40:37 compute-0 NetworkManager[56191]: <info>  [1769539237.6378] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58970 uid=0 result="success"
Jan 27 18:40:37 compute-0 NetworkManager[56191]: <info>  [1769539237.7675] checkpoint[0x55eb82702a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 27 18:40:37 compute-0 NetworkManager[56191]: <info>  [1769539237.7682] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58970 uid=0 result="success"
Jan 27 18:40:37 compute-0 ansible-async_wrapper.py[58968]: Module complete (58968)
Jan 27 18:40:38 compute-0 ansible-async_wrapper.py[58967]: Done in kid B.
Jan 27 18:40:39 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 18:40:40 compute-0 sudo[59409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqrdurfsbnpcbrsnxlxyrwyarwxjxkom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539236.8229547-290-230750255401476/AnsiballZ_async_status.py'
Jan 27 18:40:40 compute-0 sudo[59409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:41 compute-0 python3.9[59411]: ansible-ansible.legacy.async_status Invoked with jid=j350936886010.58964 mode=status _async_dir=/root/.ansible_async
Jan 27 18:40:41 compute-0 sudo[59409]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:41 compute-0 sudo[59509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlnpwsdbrciurwywnvlzufudafruyizx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539236.8229547-290-230750255401476/AnsiballZ_async_status.py'
Jan 27 18:40:41 compute-0 sudo[59509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:41 compute-0 python3.9[59511]: ansible-ansible.legacy.async_status Invoked with jid=j350936886010.58964 mode=cleanup _async_dir=/root/.ansible_async
Jan 27 18:40:41 compute-0 sudo[59509]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:42 compute-0 sudo[59661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvknuafixrmcuhwnrwrztuozezzwbngu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539241.948086-317-169854218539199/AnsiballZ_stat.py'
Jan 27 18:40:42 compute-0 sudo[59661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:42 compute-0 python3.9[59663]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:40:42 compute-0 sudo[59661]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:42 compute-0 sudo[59784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtnjayeyqxfmqoapgmtjmxdsiyvrmzju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539241.948086-317-169854218539199/AnsiballZ_copy.py'
Jan 27 18:40:42 compute-0 sudo[59784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:43 compute-0 python3.9[59786]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539241.948086-317-169854218539199/.source.returncode _original_basename=.7dmiw3vu follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:40:43 compute-0 sudo[59784]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:43 compute-0 sudo[59936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oujmugosfhzabucxntsgubjozsrgqsed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539243.372078-333-206919223444534/AnsiballZ_stat.py'
Jan 27 18:40:43 compute-0 sudo[59936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:43 compute-0 python3.9[59938]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:40:43 compute-0 sudo[59936]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:44 compute-0 sudo[60060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdxlwezvhxvjtnvunuzidsxyhqduhpge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539243.372078-333-206919223444534/AnsiballZ_copy.py'
Jan 27 18:40:44 compute-0 sudo[60060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:44 compute-0 python3.9[60062]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539243.372078-333-206919223444534/.source.cfg _original_basename=.zo7ufzs0 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:40:44 compute-0 sudo[60060]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:44 compute-0 sudo[60212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erjkjnaojouvjjpnppgoxndikyanjbab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539244.6627722-348-4148909971001/AnsiballZ_systemd.py'
Jan 27 18:40:44 compute-0 sudo[60212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:40:45 compute-0 python3.9[60214]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:40:46 compute-0 systemd[1]: Reloading Network Manager...
Jan 27 18:40:46 compute-0 NetworkManager[56191]: <info>  [1769539246.3589] audit: op="reload" arg="0" pid=60218 uid=0 result="success"
Jan 27 18:40:46 compute-0 NetworkManager[56191]: <info>  [1769539246.3600] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 27 18:40:46 compute-0 systemd[1]: Reloaded Network Manager.
Jan 27 18:40:46 compute-0 sudo[60212]: pam_unix(sudo:session): session closed for user root
Jan 27 18:40:46 compute-0 sshd-session[52194]: Connection closed by 192.168.122.31 port 35488
Jan 27 18:40:46 compute-0 sshd-session[52191]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:40:46 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 27 18:40:46 compute-0 systemd[1]: session-11.scope: Consumed 54.066s CPU time.
Jan 27 18:40:46 compute-0 systemd-logind[795]: Session 11 logged out. Waiting for processes to exit.
Jan 27 18:40:46 compute-0 systemd-logind[795]: Removed session 11.
Jan 27 18:40:52 compute-0 sshd-session[60250]: Accepted publickey for zuul from 192.168.122.31 port 55720 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:40:52 compute-0 systemd-logind[795]: New session 12 of user zuul.
Jan 27 18:40:52 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 27 18:40:52 compute-0 sshd-session[60250]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:40:53 compute-0 python3.9[60403]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:40:54 compute-0 python3.9[60558]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:40:56 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 18:40:56 compute-0 python3.9[60748]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:40:57 compute-0 sshd-session[60253]: Connection closed by 192.168.122.31 port 55720
Jan 27 18:40:57 compute-0 sshd-session[60250]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:40:57 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 27 18:40:57 compute-0 systemd[1]: session-12.scope: Consumed 2.604s CPU time.
Jan 27 18:40:57 compute-0 systemd-logind[795]: Session 12 logged out. Waiting for processes to exit.
Jan 27 18:40:57 compute-0 systemd-logind[795]: Removed session 12.
Jan 27 18:41:02 compute-0 sshd-session[60776]: Accepted publickey for zuul from 192.168.122.31 port 34462 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:41:02 compute-0 systemd-logind[795]: New session 13 of user zuul.
Jan 27 18:41:02 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 27 18:41:02 compute-0 sshd-session[60776]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:41:03 compute-0 python3.9[60930]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:41:05 compute-0 python3.9[61084]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:41:05 compute-0 sudo[61238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuomdmevffzsujdgwyczbbdslqewborj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539265.473115-35-44562378961810/AnsiballZ_setup.py'
Jan 27 18:41:05 compute-0 sudo[61238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:06 compute-0 python3.9[61240]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:41:06 compute-0 sudo[61238]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:07 compute-0 sudo[61323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhyepdshiribumsltxjiwwyutcbxgqjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539265.473115-35-44562378961810/AnsiballZ_dnf.py'
Jan 27 18:41:07 compute-0 sudo[61323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:07 compute-0 python3.9[61325]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:41:08 compute-0 sudo[61323]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:09 compute-0 sudo[61476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnmbwqvnisuhhqdetfakxccfypzlwewl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539268.891353-47-84608473944206/AnsiballZ_setup.py'
Jan 27 18:41:09 compute-0 sudo[61476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:09 compute-0 python3.9[61478]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:41:09 compute-0 sudo[61476]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:10 compute-0 sudo[61667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iascqakfpxjzkcnxrxqwvrmpgcqmorfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539270.0921066-58-25829113661289/AnsiballZ_file.py'
Jan 27 18:41:10 compute-0 sudo[61667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:10 compute-0 python3.9[61669]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:41:10 compute-0 sudo[61667]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:11 compute-0 sudo[61819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdpithvujlomdaaknxacejcrsetvmscq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539271.027758-66-195246336161166/AnsiballZ_command.py'
Jan 27 18:41:11 compute-0 sudo[61819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:11 compute-0 python3.9[61821]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:41:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:41:11 compute-0 sudo[61819]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:12 compute-0 sudo[61981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgollixuznqgsbpatuxpukzqehdvlept ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539272.0498483-74-236947996039276/AnsiballZ_stat.py'
Jan 27 18:41:12 compute-0 sudo[61981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:12 compute-0 python3.9[61983]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:41:12 compute-0 sudo[61981]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:12 compute-0 sudo[62059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbswggpcpicytzwoyqdseqfflnwhxfvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539272.0498483-74-236947996039276/AnsiballZ_file.py'
Jan 27 18:41:13 compute-0 sudo[62059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:13 compute-0 python3.9[62061]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:41:13 compute-0 sudo[62059]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:13 compute-0 sudo[62211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-citwkajbgcmhdgytowmznbmwssufnlfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539273.3774006-86-110438675172593/AnsiballZ_stat.py'
Jan 27 18:41:13 compute-0 sudo[62211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:13 compute-0 python3.9[62213]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:41:14 compute-0 sudo[62211]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:14 compute-0 sudo[62289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzlybbfiqjwrohwowxdzkdqdziqxzpwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539273.3774006-86-110438675172593/AnsiballZ_file.py'
Jan 27 18:41:14 compute-0 sudo[62289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:14 compute-0 python3.9[62291]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:41:14 compute-0 sudo[62289]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:15 compute-0 sudo[62441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrlwzmfffivvjoddaeyfaibdkxvymmfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539274.7197816-99-204659771026358/AnsiballZ_ini_file.py'
Jan 27 18:41:15 compute-0 sudo[62441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:15 compute-0 python3.9[62443]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:41:15 compute-0 sudo[62441]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:15 compute-0 sudo[62593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdzahznujlaskivhprkgrccovqtcyymd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539275.670848-99-94227400238394/AnsiballZ_ini_file.py'
Jan 27 18:41:16 compute-0 sudo[62593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:16 compute-0 python3.9[62595]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:41:16 compute-0 sudo[62593]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:16 compute-0 sudo[62745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxxrbwxlxxqrxitgszktpibaawptkdwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539276.4269023-99-147153242880621/AnsiballZ_ini_file.py'
Jan 27 18:41:16 compute-0 sudo[62745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:17 compute-0 python3.9[62747]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:41:17 compute-0 sudo[62745]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:17 compute-0 sudo[62897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucwsiwrkdopgcvxhpsdesvelhsmlqwhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539277.179419-99-32870537838374/AnsiballZ_ini_file.py'
Jan 27 18:41:17 compute-0 sudo[62897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:17 compute-0 python3.9[62899]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:41:17 compute-0 sudo[62897]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:18 compute-0 sudo[63049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkpjxkmjhohdghghazgdjfalfvcqozzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539278.1321-130-162578154111501/AnsiballZ_dnf.py'
Jan 27 18:41:18 compute-0 sudo[63049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:18 compute-0 python3.9[63051]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:41:19 compute-0 sudo[63049]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:20 compute-0 sudo[63202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ualoxmnmekougungusyknrndactcevoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539280.3127754-141-205831418645372/AnsiballZ_setup.py'
Jan 27 18:41:20 compute-0 sudo[63202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:21 compute-0 python3.9[63204]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:41:21 compute-0 sudo[63202]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:21 compute-0 sudo[63356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbelxxbbvgxxhnzsypnyroxxhtlmsfjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539281.3669796-149-109186864907463/AnsiballZ_stat.py'
Jan 27 18:41:21 compute-0 sudo[63356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:21 compute-0 python3.9[63358]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:41:21 compute-0 sudo[63356]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:22 compute-0 sudo[63508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxajijxdrluokgiqozffshdxumxgguoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539282.0858269-158-15142622651948/AnsiballZ_stat.py'
Jan 27 18:41:22 compute-0 sudo[63508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:22 compute-0 python3.9[63510]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:41:22 compute-0 sudo[63508]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:23 compute-0 sudo[63660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjyfyipitsbyohxvyrtabqnooygbyzsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539282.8860295-168-24991362302045/AnsiballZ_command.py'
Jan 27 18:41:23 compute-0 sudo[63660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:23 compute-0 python3.9[63662]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:41:23 compute-0 sudo[63660]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:24 compute-0 sudo[63813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcdfqfuihbeldcsabsfevtwfxzhiydpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539283.7240636-178-50439344086435/AnsiballZ_service_facts.py'
Jan 27 18:41:24 compute-0 sudo[63813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:24 compute-0 python3.9[63815]: ansible-service_facts Invoked
Jan 27 18:41:24 compute-0 network[63832]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 18:41:24 compute-0 network[63833]: 'network-scripts' will be removed from distribution in near future.
Jan 27 18:41:24 compute-0 network[63834]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 18:41:27 compute-0 sudo[63813]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:28 compute-0 sudo[64117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqixumupsipdnzdznioegatqiiywbnjj ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769539288.35018-193-11947166948555/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769539288.35018-193-11947166948555/args'
Jan 27 18:41:28 compute-0 sudo[64117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:28 compute-0 sudo[64117]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:29 compute-0 sudo[64284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxfibhpvulalldfhhmgbuhpwsqdduhty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539289.0353577-204-219376523329430/AnsiballZ_dnf.py'
Jan 27 18:41:29 compute-0 sudo[64284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:29 compute-0 python3.9[64286]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:41:30 compute-0 sudo[64284]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:31 compute-0 sudo[64437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmrnylbzgjosqcpzgbduarcvsqdjditk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539291.3024108-217-24720483107173/AnsiballZ_package_facts.py'
Jan 27 18:41:31 compute-0 sudo[64437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:32 compute-0 python3.9[64439]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 27 18:41:32 compute-0 sudo[64437]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:33 compute-0 sudo[64589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muraovjqsfejojdyecqqoazjspvhjibh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539292.8445406-227-259421348612312/AnsiballZ_stat.py'
Jan 27 18:41:33 compute-0 sudo[64589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:33 compute-0 python3.9[64591]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:41:33 compute-0 sudo[64589]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:34 compute-0 sudo[64714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcffqoabxmtwxmixbunkzakkidujwmeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539292.8445406-227-259421348612312/AnsiballZ_copy.py'
Jan 27 18:41:34 compute-0 sudo[64714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:34 compute-0 python3.9[64716]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539292.8445406-227-259421348612312/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:41:34 compute-0 sudo[64714]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:34 compute-0 sudo[64868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oujszezprhptvpjbehagkokrroqikquk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539294.5077786-242-15406902982702/AnsiballZ_stat.py'
Jan 27 18:41:34 compute-0 sudo[64868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:34 compute-0 python3.9[64870]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:41:35 compute-0 sudo[64868]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:35 compute-0 sudo[64993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gclbjjgmsxtuwcpbwkqbwjmpnfuanlqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539294.5077786-242-15406902982702/AnsiballZ_copy.py'
Jan 27 18:41:35 compute-0 sudo[64993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:35 compute-0 python3.9[64995]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539294.5077786-242-15406902982702/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:41:35 compute-0 sudo[64993]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:36 compute-0 sudo[65147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huvnvifjleoeyxunnqlwsjlrzuxicicu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539296.1402378-263-84458062396501/AnsiballZ_lineinfile.py'
Jan 27 18:41:36 compute-0 sudo[65147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:36 compute-0 python3.9[65149]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:41:36 compute-0 sudo[65147]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:37 compute-0 sudo[65301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djpqkchtqnkfutlgfsujfksailusfjvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539297.4490278-278-218314522959883/AnsiballZ_setup.py'
Jan 27 18:41:37 compute-0 sudo[65301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:38 compute-0 python3.9[65303]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:41:38 compute-0 sudo[65301]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:38 compute-0 sudo[65385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsdjgozlswebaflpsyiuzmczhiygnqlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539297.4490278-278-218314522959883/AnsiballZ_systemd.py'
Jan 27 18:41:38 compute-0 sudo[65385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:39 compute-0 python3.9[65387]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:41:39 compute-0 sudo[65385]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:39 compute-0 sudo[65539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfmjyrsidhjjwwheqxlyuanepvwffgwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539299.7042181-294-203573274584956/AnsiballZ_setup.py'
Jan 27 18:41:39 compute-0 sudo[65539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:40 compute-0 python3.9[65541]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:41:40 compute-0 sudo[65539]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:40 compute-0 sudo[65623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzeoxdfmnsyrokcfnzxbressicbylqxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539299.7042181-294-203573274584956/AnsiballZ_systemd.py'
Jan 27 18:41:40 compute-0 sudo[65623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:41 compute-0 python3.9[65625]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:41:41 compute-0 chronyd[803]: chronyd exiting
Jan 27 18:41:41 compute-0 systemd[1]: Stopping NTP client/server...
Jan 27 18:41:41 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 27 18:41:41 compute-0 systemd[1]: Stopped NTP client/server.
Jan 27 18:41:41 compute-0 systemd[1]: Starting NTP client/server...
Jan 27 18:41:41 compute-0 chronyd[65633]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 27 18:41:41 compute-0 chronyd[65633]: Frequency -23.495 +/- 0.193 ppm read from /var/lib/chrony/drift
Jan 27 18:41:41 compute-0 chronyd[65633]: Loaded seccomp filter (level 2)
Jan 27 18:41:41 compute-0 systemd[1]: Started NTP client/server.
Jan 27 18:41:41 compute-0 sudo[65623]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:41 compute-0 sshd-session[60779]: Connection closed by 192.168.122.31 port 34462
Jan 27 18:41:41 compute-0 sshd-session[60776]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:41:41 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 27 18:41:41 compute-0 systemd[1]: session-13.scope: Consumed 28.172s CPU time.
Jan 27 18:41:41 compute-0 systemd-logind[795]: Session 13 logged out. Waiting for processes to exit.
Jan 27 18:41:41 compute-0 systemd-logind[795]: Removed session 13.
Jan 27 18:41:47 compute-0 sshd-session[65659]: Accepted publickey for zuul from 192.168.122.31 port 58858 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:41:47 compute-0 systemd-logind[795]: New session 14 of user zuul.
Jan 27 18:41:47 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 27 18:41:47 compute-0 sshd-session[65659]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:41:48 compute-0 python3.9[65812]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:41:49 compute-0 sudo[65966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toejfdsnrmufhvognnnaozakjbwodwlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539309.0140762-28-208147206430183/AnsiballZ_file.py'
Jan 27 18:41:49 compute-0 sudo[65966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:49 compute-0 python3.9[65968]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:41:49 compute-0 sudo[65966]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:50 compute-0 sudo[66141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykhgdfjcdkvbjidutqxwwtgxwudzdlox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539309.8945282-36-85026496439204/AnsiballZ_stat.py'
Jan 27 18:41:50 compute-0 sudo[66141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:50 compute-0 python3.9[66143]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:41:50 compute-0 sudo[66141]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:50 compute-0 sudo[66219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjkqgkpgacodnszqlmrpxmlhavordxcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539309.8945282-36-85026496439204/AnsiballZ_file.py'
Jan 27 18:41:50 compute-0 sudo[66219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:51 compute-0 python3.9[66221]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.kxucicvj recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:41:51 compute-0 sudo[66219]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:51 compute-0 sudo[66371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqktbpzsvwglofezuqaxgmbhcpgprsur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539311.5023603-56-4119816977242/AnsiballZ_stat.py'
Jan 27 18:41:51 compute-0 sudo[66371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:51 compute-0 python3.9[66373]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:41:51 compute-0 sudo[66371]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:52 compute-0 sudo[66494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vafqtoschytlhglkhhyqzfwxpsrwztmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539311.5023603-56-4119816977242/AnsiballZ_copy.py'
Jan 27 18:41:52 compute-0 sudo[66494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:52 compute-0 python3.9[66496]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539311.5023603-56-4119816977242/.source _original_basename=.rf73dtd8 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:41:52 compute-0 sudo[66494]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:53 compute-0 sudo[66646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvfwenurdwrtppzffsxxmkrmiybtyvjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539312.9274185-72-239427286048462/AnsiballZ_file.py'
Jan 27 18:41:53 compute-0 sudo[66646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:53 compute-0 python3.9[66648]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:41:53 compute-0 sudo[66646]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:53 compute-0 sudo[66798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvgrbrxgnrdutuqahqdhmaltwikvepfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539313.675461-80-160138817529027/AnsiballZ_stat.py'
Jan 27 18:41:53 compute-0 sudo[66798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:54 compute-0 python3.9[66800]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:41:54 compute-0 sudo[66798]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:54 compute-0 sudo[66921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogivixrfgqpiuoqdjsourikstkgntjvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539313.675461-80-160138817529027/AnsiballZ_copy.py'
Jan 27 18:41:54 compute-0 sudo[66921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:54 compute-0 python3.9[66923]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539313.675461-80-160138817529027/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:41:54 compute-0 sudo[66921]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:55 compute-0 sudo[67073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbnvfgbivnyagybirjpwzqooizbpogae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539314.9062176-80-186368981291146/AnsiballZ_stat.py'
Jan 27 18:41:55 compute-0 sudo[67073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:55 compute-0 python3.9[67075]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:41:55 compute-0 sudo[67073]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:55 compute-0 sudo[67196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cazswtcdfssygaahamhliskikybyuduo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539314.9062176-80-186368981291146/AnsiballZ_copy.py'
Jan 27 18:41:55 compute-0 sudo[67196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:55 compute-0 python3.9[67198]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539314.9062176-80-186368981291146/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:41:56 compute-0 sudo[67196]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:56 compute-0 sudo[67348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veraoyljbwflnsgeuyiapkbwwerqhbom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539316.1550956-109-21308732023511/AnsiballZ_file.py'
Jan 27 18:41:56 compute-0 sudo[67348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:56 compute-0 python3.9[67350]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:41:56 compute-0 sudo[67348]: pam_unix(sudo:session): session closed for user root
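A note on `mode=420` in the ansible.builtin.file call above: when a mode is passed as a bare integer, it is echoed back in decimal, and 420 decimal is 0o644, so this is presumably the familiar 0644 permission set rather than a typo. A one-line check, for illustration only:

```python
# 420 (decimal) == 0o644 (octal); integer modes are logged in decimal form.
assert 420 == 0o644
print(oct(420))  # -> '0o644'
```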
Jan 27 18:41:57 compute-0 sudo[67500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-splogjlkymdqwjbyakomtxkicuojsrui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539316.953688-117-137157280226674/AnsiballZ_stat.py'
Jan 27 18:41:57 compute-0 sudo[67500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:57 compute-0 python3.9[67502]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:41:57 compute-0 sudo[67500]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:57 compute-0 sudo[67623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akektklzkkrtjcsetnfcixvmnudbrwwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539316.953688-117-137157280226674/AnsiballZ_copy.py'
Jan 27 18:41:57 compute-0 sudo[67623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:57 compute-0 python3.9[67625]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539316.953688-117-137157280226674/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:41:58 compute-0 sudo[67623]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:58 compute-0 sudo[67775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgadphqytqxccioxrtqixsodxggtlcwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539318.184048-132-67464496225002/AnsiballZ_stat.py'
Jan 27 18:41:58 compute-0 sudo[67775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:58 compute-0 python3.9[67777]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:41:58 compute-0 sudo[67775]: pam_unix(sudo:session): session closed for user root
Jan 27 18:41:59 compute-0 sudo[67898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tccczgspueyzgmfngoqqzxyywjmisfoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539318.184048-132-67464496225002/AnsiballZ_copy.py'
Jan 27 18:41:59 compute-0 sudo[67898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:41:59 compute-0 python3.9[67900]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539318.184048-132-67464496225002/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:41:59 compute-0 sudo[67898]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:00 compute-0 sudo[68050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njcjejsrbseqwwxpwcmlbwtfflemcnao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539319.3770106-147-146143667778981/AnsiballZ_systemd.py'
Jan 27 18:42:00 compute-0 sudo[68050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:00 compute-0 python3.9[68052]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:42:00 compute-0 systemd[1]: Reloading.
Jan 27 18:42:00 compute-0 systemd-sysv-generator[68081]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:42:00 compute-0 systemd-rc-local-generator[68078]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:42:00 compute-0 systemd[1]: Reloading.
Jan 27 18:42:00 compute-0 systemd-rc-local-generator[68120]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:42:00 compute-0 systemd-sysv-generator[68123]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:42:00 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 27 18:42:00 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 27 18:42:00 compute-0 sudo[68050]: pam_unix(sudo:session): session closed for user root
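The ansible.builtin.systemd task above reloads the manager, enables the unit, and starts it, which is why two `Reloading.` passes precede `Starting EDPM Container Shutdown...`. A rough, assumption-level equivalent in terms of plain systemctl calls, sketched via Python's subprocess; this is not the module's actual implementation:

```python
import subprocess

def enable_and_start(unit: str) -> None:
    """Approximate effect of systemd: daemon_reload=True, enabled=True, state=started."""
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", unit], check=True)
    subprocess.run(["systemctl", "start", unit], check=True)

# enable_and_start("edpm-container-shutdown.service")
```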
Jan 27 18:42:01 compute-0 sudo[68277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tupjqfbdjecowobhqcudqpujqchsmfui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539321.1300435-155-268189118744070/AnsiballZ_stat.py'
Jan 27 18:42:01 compute-0 sudo[68277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:01 compute-0 python3.9[68279]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:42:01 compute-0 sudo[68277]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:02 compute-0 sudo[68400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvqulykkbzwyycdxcrooqtrkyxfgnqcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539321.1300435-155-268189118744070/AnsiballZ_copy.py'
Jan 27 18:42:02 compute-0 sudo[68400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:02 compute-0 python3.9[68402]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539321.1300435-155-268189118744070/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:02 compute-0 sudo[68400]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:02 compute-0 sudo[68552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuzdyxpayfxmfmzzdhuhmyddejcnlrfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539322.4814858-170-236659833298691/AnsiballZ_stat.py'
Jan 27 18:42:02 compute-0 sudo[68552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:02 compute-0 python3.9[68554]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:42:03 compute-0 sudo[68552]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:03 compute-0 sudo[68675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlvqokosgolqpzmkbiibicmnmehuvdgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539322.4814858-170-236659833298691/AnsiballZ_copy.py'
Jan 27 18:42:03 compute-0 sudo[68675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:03 compute-0 python3.9[68677]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539322.4814858-170-236659833298691/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:03 compute-0 sudo[68675]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:04 compute-0 sudo[68827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzbpllifuzdjtliaedrjucyzjzzsxhvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539323.808522-185-179183840848576/AnsiballZ_systemd.py'
Jan 27 18:42:04 compute-0 sudo[68827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:04 compute-0 python3.9[68829]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:42:04 compute-0 systemd[1]: Reloading.
Jan 27 18:42:04 compute-0 systemd-rc-local-generator[68858]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:42:04 compute-0 systemd-sysv-generator[68862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:42:04 compute-0 systemd[1]: Reloading.
Jan 27 18:42:04 compute-0 systemd-sysv-generator[68898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:42:04 compute-0 systemd-rc-local-generator[68893]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:42:04 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 18:42:04 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 18:42:04 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 18:42:04 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 18:42:04 compute-0 sudo[68827]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:05 compute-0 python3.9[69056]: ansible-ansible.builtin.service_facts Invoked
Jan 27 18:42:05 compute-0 network[69073]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 18:42:05 compute-0 network[69074]: 'network-scripts' will be removed from distribution in near future.
Jan 27 18:42:05 compute-0 network[69075]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 18:42:10 compute-0 sudo[69335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibvnhnqvvdwabssxlnvwarbrcarijpkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539330.1470122-201-241747713888850/AnsiballZ_systemd.py'
Jan 27 18:42:10 compute-0 sudo[69335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:10 compute-0 python3.9[69337]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:42:10 compute-0 systemd[1]: Reloading.
Jan 27 18:42:11 compute-0 systemd-rc-local-generator[69368]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:42:11 compute-0 systemd-sysv-generator[69371]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:42:11 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 27 18:42:11 compute-0 iptables.init[69378]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 27 18:42:11 compute-0 iptables.init[69378]: iptables: Flushing firewall rules: [  OK  ]
Jan 27 18:42:11 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 27 18:42:11 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 27 18:42:11 compute-0 sudo[69335]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:12 compute-0 sudo[69572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qibyqapqvnzkzfjfahecdueanlsrpjeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539331.7530122-201-269256144571135/AnsiballZ_systemd.py'
Jan 27 18:42:12 compute-0 sudo[69572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:12 compute-0 python3.9[69574]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:42:12 compute-0 sudo[69572]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:13 compute-0 sudo[69726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyeabkyjbtojlwqwzyvcgilzxpvfmcsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539332.6885018-217-69577986637915/AnsiballZ_systemd.py'
Jan 27 18:42:13 compute-0 sudo[69726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:13 compute-0 python3.9[69728]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:42:13 compute-0 systemd[1]: Reloading.
Jan 27 18:42:13 compute-0 systemd-sysv-generator[69762]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:42:13 compute-0 systemd-rc-local-generator[69758]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:42:13 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 27 18:42:13 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 27 18:42:13 compute-0 sudo[69726]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:14 compute-0 sudo[69918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nobxfzoyxhsqtggrpdsbgovagauxouom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539333.8642905-225-280531499450414/AnsiballZ_command.py'
Jan 27 18:42:14 compute-0 sudo[69918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:14 compute-0 python3.9[69920]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:42:14 compute-0 sudo[69918]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:15 compute-0 sudo[70071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdusqkcmylgnbcczwiznpobghpxgqcxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539335.1833901-239-205218625548537/AnsiballZ_stat.py'
Jan 27 18:42:15 compute-0 sudo[70071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:15 compute-0 python3.9[70073]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:42:15 compute-0 sudo[70071]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:16 compute-0 sudo[70196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvialszztedwcgcaxwqfiwbsvajowsow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539335.1833901-239-205218625548537/AnsiballZ_copy.py'
Jan 27 18:42:16 compute-0 sudo[70196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:16 compute-0 python3.9[70198]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539335.1833901-239-205218625548537/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:16 compute-0 sudo[70196]: pam_unix(sudo:session): session closed for user root
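The copy task above passes `validate=/usr/sbin/sshd -T -f %s`, so the candidate sshd_config is parsed in test mode before it replaces /etc/ssh/sshd_config; the reload that follows only happens once validation succeeds. A small sketch of that validation step, assuming a local /usr/sbin/sshd:

```python
import subprocess

def sshd_config_ok(candidate_path: str) -> bool:
    """Dry-run parse a candidate sshd_config the way the task's validate= option does."""
    result = subprocess.run(
        ["/usr/sbin/sshd", "-T", "-f", candidate_path],
        capture_output=True, text=True,
    )
    return result.returncode == 0
```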
Jan 27 18:42:16 compute-0 sudo[70349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlximxwupollwtsjqjnofrfgsephirke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539336.5642004-254-76108782799048/AnsiballZ_systemd.py'
Jan 27 18:42:16 compute-0 sudo[70349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:17 compute-0 python3.9[70351]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:42:17 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 27 18:42:17 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 27 18:42:17 compute-0 sshd[1007]: Received SIGHUP; restarting.
Jan 27 18:42:17 compute-0 sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 27 18:42:17 compute-0 sshd[1007]: Server listening on :: port 22.
Jan 27 18:42:17 compute-0 sudo[70349]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:17 compute-0 sudo[70505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sndtdcytiwknefdbhuwpplfnpbvkvnsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539337.473897-262-115974720468887/AnsiballZ_file.py'
Jan 27 18:42:17 compute-0 sudo[70505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:17 compute-0 python3.9[70507]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:17 compute-0 sudo[70505]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:18 compute-0 sudo[70657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iypwcaxaajhftocqfjfwbknbfqsxuhym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539338.1384082-270-18168122270456/AnsiballZ_stat.py'
Jan 27 18:42:18 compute-0 sudo[70657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:18 compute-0 python3.9[70659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:42:18 compute-0 sudo[70657]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:19 compute-0 sudo[70780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azzbdqaupqufwwzxvwuxqbruunnwixmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539338.1384082-270-18168122270456/AnsiballZ_copy.py'
Jan 27 18:42:19 compute-0 sudo[70780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:19 compute-0 python3.9[70782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539338.1384082-270-18168122270456/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:19 compute-0 sudo[70780]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:20 compute-0 sudo[70932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpqyesjbolducdmnkdozddctpknfxnjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539339.5821376-288-222222039037482/AnsiballZ_timezone.py'
Jan 27 18:42:20 compute-0 sudo[70932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:20 compute-0 python3.9[70934]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 27 18:42:20 compute-0 systemd[1]: Starting Time & Date Service...
Jan 27 18:42:20 compute-0 systemd[1]: Started Time & Date Service.
Jan 27 18:42:20 compute-0 sudo[70932]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:21 compute-0 sudo[71088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjvifngqbybosrfjfqjlmidwflejrlsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539341.1663945-297-66269870785451/AnsiballZ_file.py'
Jan 27 18:42:21 compute-0 sudo[71088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:21 compute-0 python3.9[71090]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:21 compute-0 sudo[71088]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:22 compute-0 sudo[71240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hywntbsifeplgvzerrjtvxuntgmvciqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539341.9057078-305-115692009660520/AnsiballZ_stat.py'
Jan 27 18:42:22 compute-0 sudo[71240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:22 compute-0 python3.9[71242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:42:22 compute-0 sudo[71240]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:22 compute-0 sudo[71363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuhvkggnagvjpzulijmxvlqjfefbljbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539341.9057078-305-115692009660520/AnsiballZ_copy.py'
Jan 27 18:42:22 compute-0 sudo[71363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:23 compute-0 python3.9[71365]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539341.9057078-305-115692009660520/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:23 compute-0 sudo[71363]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:23 compute-0 sudo[71515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmsqexpqxqwvhtqlalncuctdjdbhfnpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539343.2739894-320-271244833793719/AnsiballZ_stat.py'
Jan 27 18:42:23 compute-0 sudo[71515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:23 compute-0 python3.9[71517]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:42:23 compute-0 sudo[71515]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:24 compute-0 sudo[71638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvkcekddqcwyojwxiirazwjlwldssfdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539343.2739894-320-271244833793719/AnsiballZ_copy.py'
Jan 27 18:42:24 compute-0 sudo[71638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:24 compute-0 python3.9[71640]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539343.2739894-320-271244833793719/.source.yaml _original_basename=.fjd6guy7 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:24 compute-0 sudo[71638]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:24 compute-0 sudo[71790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vngigyvvypsdhvfofygiypakufemmvgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539344.5243695-335-176510322111127/AnsiballZ_stat.py'
Jan 27 18:42:24 compute-0 sudo[71790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:25 compute-0 python3.9[71792]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:42:25 compute-0 sudo[71790]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:25 compute-0 sudo[71913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sllpkindqfixpyftbkwjtmbytwvrvuyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539344.5243695-335-176510322111127/AnsiballZ_copy.py'
Jan 27 18:42:25 compute-0 sudo[71913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:25 compute-0 python3.9[71915]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539344.5243695-335-176510322111127/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:25 compute-0 sudo[71913]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:26 compute-0 sudo[72065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-josotdtdakokxrbxvzhwauojazmopufk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539345.9105508-350-128695346429836/AnsiballZ_command.py'
Jan 27 18:42:26 compute-0 sudo[72065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:26 compute-0 python3.9[72067]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:42:26 compute-0 sudo[72065]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:26 compute-0 sudo[72218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctnnboquvdghsmyrvdkotanvddpnrndh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539346.6835308-358-186720442946085/AnsiballZ_command.py'
Jan 27 18:42:26 compute-0 sudo[72218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:27 compute-0 python3.9[72220]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:42:27 compute-0 sudo[72218]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:27 compute-0 sudo[72371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeosuynwhgulmsvlyqcciebxzqslkufh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769539347.3714457-366-119056984181152/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 18:42:27 compute-0 sudo[72371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:27 compute-0 python3[72373]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 18:42:27 compute-0 sudo[72371]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:28 compute-0 sudo[72523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzoamcokvrvbdzbhxdfavexczguazuev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539348.1482012-374-64049568774559/AnsiballZ_stat.py'
Jan 27 18:42:28 compute-0 sudo[72523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:28 compute-0 python3.9[72525]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:42:28 compute-0 sudo[72523]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:29 compute-0 sudo[72646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywagffkuuwchinftutzsebvjkulvtbxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539348.1482012-374-64049568774559/AnsiballZ_copy.py'
Jan 27 18:42:29 compute-0 sudo[72646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:29 compute-0 python3.9[72648]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539348.1482012-374-64049568774559/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:29 compute-0 sudo[72646]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:29 compute-0 sudo[72798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erypnokkhhaxxwvggrxpamkhdehtprlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539349.4412215-389-74630781263722/AnsiballZ_stat.py'
Jan 27 18:42:29 compute-0 sudo[72798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:29 compute-0 python3.9[72800]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:42:29 compute-0 sudo[72798]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:30 compute-0 sudo[72921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utqgrjodfykiubkxqxlziozydpsggahw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539349.4412215-389-74630781263722/AnsiballZ_copy.py'
Jan 27 18:42:30 compute-0 sudo[72921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:30 compute-0 python3.9[72923]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539349.4412215-389-74630781263722/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:30 compute-0 sudo[72921]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:31 compute-0 sudo[73073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzyxfwyyxaxlfjaooouogcpuvxrshevr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539350.7406285-404-41558666295884/AnsiballZ_stat.py'
Jan 27 18:42:31 compute-0 sudo[73073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:31 compute-0 python3.9[73075]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:42:31 compute-0 sudo[73073]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:31 compute-0 sudo[73196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tngqydzowkldduzxzsptkvgomgvpifzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539350.7406285-404-41558666295884/AnsiballZ_copy.py'
Jan 27 18:42:31 compute-0 sudo[73196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:31 compute-0 python3.9[73198]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539350.7406285-404-41558666295884/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:31 compute-0 sudo[73196]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:32 compute-0 sudo[73348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiyssdgnnnrdzqxanrdnadjjnxuxszww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539352.0088174-419-201698129682038/AnsiballZ_stat.py'
Jan 27 18:42:32 compute-0 sudo[73348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:32 compute-0 python3.9[73350]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:42:32 compute-0 sudo[73348]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:32 compute-0 sudo[73471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeqimvvkirtjexelnnigiigvvrzxcbba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539352.0088174-419-201698129682038/AnsiballZ_copy.py'
Jan 27 18:42:32 compute-0 sudo[73471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:33 compute-0 python3.9[73473]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539352.0088174-419-201698129682038/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:33 compute-0 sudo[73471]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:33 compute-0 sudo[73623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puwdukqmyfufczoxrngbkbdaukwmtkoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539353.3373704-434-231685405290918/AnsiballZ_stat.py'
Jan 27 18:42:33 compute-0 sudo[73623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:33 compute-0 python3.9[73625]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:42:33 compute-0 sudo[73623]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:34 compute-0 sudo[73746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asmpzbeozdzcdulbsqbthdafmldlsdyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539353.3373704-434-231685405290918/AnsiballZ_copy.py'
Jan 27 18:42:34 compute-0 sudo[73746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:34 compute-0 python3.9[73748]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539353.3373704-434-231685405290918/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:34 compute-0 sudo[73746]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:35 compute-0 sudo[73898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzbbobxqvyqnloycszmlrknkcygnxdsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539354.755269-449-243879628380585/AnsiballZ_file.py'
Jan 27 18:42:35 compute-0 sudo[73898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:35 compute-0 python3.9[73900]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:35 compute-0 sudo[73898]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:35 compute-0 sudo[74050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdfqxoazsvusdrwunvlpgpjskfrkuueo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539355.5393167-457-215435990685175/AnsiballZ_command.py'
Jan 27 18:42:35 compute-0 sudo[74050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:35 compute-0 python3.9[74052]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:42:36 compute-0 sudo[74050]: pam_unix(sudo:session): session closed for user root
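The command task above concatenates the five EDPM nft fragments and pipes them through `nft -c -f -`, i.e. a check-only parse of the combined ruleset before anything is loaded. The same check, sketched in Python with the file list taken from the logged command:

```python
import subprocess
from pathlib import Path

EDPM_NFT_FRAGMENTS = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]

def ruleset_parses(paths=EDPM_NFT_FRAGMENTS) -> bool:
    """Concatenate the fragments and ask nft for a check-only (-c) parse from stdin."""
    combined = "\n".join(Path(p).read_text() for p in paths)
    result = subprocess.run(["nft", "-c", "-f", "-"], input=combined,
                            text=True, capture_output=True)
    return result.returncode == 0
```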
Jan 27 18:42:36 compute-0 sudo[74209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eseemulsxwgbiwrythzrhzocrawojqcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539356.2884116-465-124024869088202/AnsiballZ_blockinfile.py'
Jan 27 18:42:36 compute-0 sudo[74209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:37 compute-0 python3.9[74211]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:37 compute-0 sudo[74209]: pam_unix(sudo:session): session closed for user root
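From the blockinfile parameters above (path, block, marker_begin/marker_end), the managed block kept in /etc/sysconfig/nftables.conf should render roughly as below; the rest of that file is not shown in this log, so only the block itself is reconstructed here:

```python
# Reconstructed from the logged blockinfile arguments; illustration only.
NFTABLES_CONF_MANAGED_BLOCK = """\
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
"""
```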
Jan 27 18:42:37 compute-0 sshd-session[74213]: Invalid user solana from 45.148.10.240 port 37188
Jan 27 18:42:37 compute-0 sshd-session[74213]: Connection closed by invalid user solana 45.148.10.240 port 37188 [preauth]
Jan 27 18:42:37 compute-0 sudo[74364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yipgsdhhcscbnuykphyerfjmmenbvjxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539357.2827315-474-29175950650764/AnsiballZ_file.py'
Jan 27 18:42:37 compute-0 sudo[74364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:38 compute-0 python3.9[74366]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:38 compute-0 sudo[74364]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:38 compute-0 sudo[74516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kraaforxgjjidaaqegkiytpdmezhfcpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539358.254473-474-134401116181267/AnsiballZ_file.py'
Jan 27 18:42:38 compute-0 sudo[74516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:38 compute-0 python3.9[74518]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:38 compute-0 sudo[74516]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:39 compute-0 sudo[74668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pktqrvqtkbkvuabdzyzvpqdlcgvuamdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539359.0187323-489-243442375904000/AnsiballZ_mount.py'
Jan 27 18:42:39 compute-0 sudo[74668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:39 compute-0 python3.9[74670]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 18:42:39 compute-0 sudo[74668]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:39 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 18:42:39 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 18:42:40 compute-0 sudo[74822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uimrowjtlkkojsxljgsiopsjwsorfuyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539359.9900153-489-78558234060670/AnsiballZ_mount.py'
Jan 27 18:42:40 compute-0 sudo[74822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:40 compute-0 python3.9[74824]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 18:42:40 compute-0 sudo[74822]: pam_unix(sudo:session): session closed for user root
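The two ansible.posix.mount tasks above mount hugetlbfs with explicit page sizes on the directories created just before them (and, with state=mounted, also persist matching /etc/fstab entries). The mount side of that, sketched with subprocess; assumption-level only:

```python
import subprocess

def mount_hugetlbfs(path: str, pagesize: str) -> None:
    """Roughly: mount -t hugetlbfs -o pagesize=<size> none <path>."""
    subprocess.run(
        ["mount", "-t", "hugetlbfs", "-o", f"pagesize={pagesize}", "none", path],
        check=True,
    )

# mount_hugetlbfs("/dev/hugepages1G", "1G")
# mount_hugetlbfs("/dev/hugepages2M", "2M")
```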
Jan 27 18:42:40 compute-0 sshd-session[65662]: Connection closed by 192.168.122.31 port 58858
Jan 27 18:42:40 compute-0 sshd-session[65659]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:42:40 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 27 18:42:40 compute-0 systemd[1]: session-14.scope: Consumed 38.750s CPU time.
Jan 27 18:42:40 compute-0 systemd-logind[795]: Session 14 logged out. Waiting for processes to exit.
Jan 27 18:42:40 compute-0 systemd-logind[795]: Removed session 14.
Jan 27 18:42:46 compute-0 sshd-session[74850]: Accepted publickey for zuul from 192.168.122.31 port 49594 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:42:46 compute-0 systemd-logind[795]: New session 15 of user zuul.
Jan 27 18:42:46 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 27 18:42:46 compute-0 sshd-session[74850]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:42:47 compute-0 sudo[75003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cflssadbpavlfdpsavdqguljrjovguwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539367.0481267-16-53251584071622/AnsiballZ_tempfile.py'
Jan 27 18:42:47 compute-0 sudo[75003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:47 compute-0 python3.9[75005]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 27 18:42:47 compute-0 sudo[75003]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:48 compute-0 sudo[75155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqssmiqdzzjdtbnzwzxtyazcjnbajdxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539367.9297814-28-61581024200888/AnsiballZ_stat.py'
Jan 27 18:42:48 compute-0 sudo[75155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:48 compute-0 python3.9[75157]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:42:48 compute-0 sudo[75155]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:49 compute-0 sudo[75307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaqodwcnqqjfczzyeqzitqqdcylwhdtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539368.833127-38-142268941600413/AnsiballZ_setup.py'
Jan 27 18:42:49 compute-0 sudo[75307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:49 compute-0 python3.9[75309]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:42:49 compute-0 sudo[75307]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:50 compute-0 sudo[75459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibazdsyqxnyqkwizygyompjiileuqgji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539370.101002-47-94799274708148/AnsiballZ_blockinfile.py'
Jan 27 18:42:50 compute-0 sudo[75459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:50 compute-0 python3.9[75461]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNxKTFWpLU0Jh4NExBBthatauAtOYJaUCz9xicome9k8ofzXcGeN3cYRJwvT4Pvz9/9Oj8yNpV7U6i5ddZv2aveajBixoyXmQaVoU218JAK21XFp0tKm4vHY0zNYU21qeBQz1TdhGX6ZV85TG+DTFhqoesXsI377VKcqDVPSNZ0La6D08pnP7N+z6Jg7GJ47vFX5+D1Ak4EH9yNiGMB1Nt0I7jrOqVgiHw0CP2tHlssXQPJk77CH/S7He15QDlI2aS66/dPWW638fU+AbqLrCo9ouvBCEswpDMnisHTtPfzHhlf6scfc4vDg1evGYPkEOfa/hPp8MzsI6rCaGXQJ8XLLVo1GurGTIH6UijYH8ATr0gz2at8JaaKZYEzrErt6sxULxMcQzq8C5JdtYjoWKnY9G8xN50dLUqmGGy6KZDDjyrxuE9f7KhDZAQyp49lDOtg7Ob0oP4iJxBVVH+QSUFzkxnsfr1EG9K9uNOOPLtCizwrnby/lPT3tFC2XxAsSs=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC/+KgT6wXou8Hc9xlWV+W2VspkEvRY1fMFndf2yEryZ
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA6LJMHzcQ9c5oRxhq+dfEWNaxINW9NCxtSBquPjFO0VbU1c7gCAOZc0K/CjQr3w3mwVERwqpGeCSIQVSuKggDA=
                                             create=True mode=0644 path=/tmp/ansible.5487r34a state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:50 compute-0 sudo[75459]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:50 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 18:42:51 compute-0 sudo[75613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eksfmdrcjncmttjfgozkvockxvkjkpym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539371.02496-55-122427898672327/AnsiballZ_command.py'
Jan 27 18:42:51 compute-0 sudo[75613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:51 compute-0 python3.9[75615]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.5487r34a' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:42:51 compute-0 sudo[75613]: pam_unix(sudo:session): session closed for user root
Jan 27 18:42:52 compute-0 sudo[75767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqtjattpqmzlqlspvuetpwbojhrtlhft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539371.8855417-63-195873238857312/AnsiballZ_file.py'
Jan 27 18:42:52 compute-0 sudo[75767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:42:52 compute-0 python3.9[75769]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.5487r34a state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:42:52 compute-0 sudo[75767]: pam_unix(sudo:session): session closed for user root
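The four tasks above form one small flow: create a temp file, write the gathered host keys into it as a managed block, `cat` it over /etc/ssh/ssh_known_hosts, then delete the temp file. A compressed sketch of that flow (key entries abbreviated; not the play's actual code):

```python
from pathlib import Path

def install_known_hosts(tmp_path: str, entries: list[str]) -> None:
    """Write entries as a managed block to tmp_path, then copy it over ssh_known_hosts."""
    block = ("# BEGIN ANSIBLE MANAGED BLOCK\n"
             + "\n".join(entries)
             + "\n# END ANSIBLE MANAGED BLOCK\n")
    tmp = Path(tmp_path)
    tmp.write_text(block)
    Path("/etc/ssh/ssh_known_hosts").write_text(tmp.read_text())
    tmp.unlink()  # mirrors the final state=absent cleanup task
```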
Jan 27 18:42:53 compute-0 sshd-session[74853]: Connection closed by 192.168.122.31 port 49594
Jan 27 18:42:53 compute-0 sshd-session[74850]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:42:53 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 27 18:42:53 compute-0 systemd[1]: session-15.scope: Consumed 3.867s CPU time.
Jan 27 18:42:53 compute-0 systemd-logind[795]: Session 15 logged out. Waiting for processes to exit.
Jan 27 18:42:53 compute-0 systemd-logind[795]: Removed session 15.
Jan 27 18:42:57 compute-0 sshd-session[75794]: Accepted publickey for zuul from 192.168.122.31 port 35738 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:42:57 compute-0 systemd-logind[795]: New session 16 of user zuul.
Jan 27 18:42:57 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 27 18:42:57 compute-0 sshd-session[75794]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:42:59 compute-0 python3.9[75947]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:42:59 compute-0 sudo[76101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scyevyokghsgnkdqvpmcejxbbzcqjcwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539379.384224-27-263714401842713/AnsiballZ_systemd.py'
Jan 27 18:42:59 compute-0 sudo[76101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:00 compute-0 python3.9[76103]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 27 18:43:01 compute-0 sudo[76101]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:01 compute-0 sudo[76255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuelnnupxkoamarfnfuhmczvqoirectj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539381.5796075-35-59793924099798/AnsiballZ_systemd.py'
Jan 27 18:43:01 compute-0 sudo[76255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:02 compute-0 python3.9[76257]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:43:02 compute-0 sudo[76255]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:02 compute-0 sudo[76408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogtvlinrkfolymwdmeykgleouunqwvsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539382.4285264-44-134563641648371/AnsiballZ_command.py'
Jan 27 18:43:02 compute-0 sudo[76408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:03 compute-0 python3.9[76410]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:43:03 compute-0 sudo[76408]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:03 compute-0 sudo[76561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdewzgltccvstwhtqptxsjognjagwfnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539383.2843113-52-82614829803410/AnsiballZ_stat.py'
Jan 27 18:43:03 compute-0 sudo[76561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:03 compute-0 python3.9[76563]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:43:04 compute-0 sudo[76561]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:04 compute-0 sudo[76715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfhvbtrwxshpbjleewccccogjcmcraeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539384.158006-60-142252678373979/AnsiballZ_command.py'
Jan 27 18:43:04 compute-0 sudo[76715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:04 compute-0 python3.9[76717]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:43:04 compute-0 sudo[76715]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:05 compute-0 sudo[76870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcjluhbhrrcndiqkmtmdhfsnctcbmreu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539384.877248-68-182745812664810/AnsiballZ_file.py'
Jan 27 18:43:05 compute-0 sudo[76870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:05 compute-0 python3.9[76872]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:05 compute-0 sudo[76870]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:06 compute-0 sshd-session[75797]: Connection closed by 192.168.122.31 port 35738
Jan 27 18:43:06 compute-0 sshd-session[75794]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:43:06 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 27 18:43:06 compute-0 systemd[1]: session-16.scope: Consumed 4.731s CPU time.
Jan 27 18:43:06 compute-0 systemd-logind[795]: Session 16 logged out. Waiting for processes to exit.
Jan 27 18:43:06 compute-0 systemd-logind[795]: Removed session 16.
Jan 27 18:43:11 compute-0 sshd-session[76897]: Accepted publickey for zuul from 192.168.122.31 port 40210 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:43:11 compute-0 systemd-logind[795]: New session 17 of user zuul.
Jan 27 18:43:11 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 27 18:43:11 compute-0 sshd-session[76897]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:43:12 compute-0 python3.9[77050]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:43:13 compute-0 sudo[77204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-batsqfshwvcfavpaimkcfwazrjwxqfdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539392.932356-29-224142328725720/AnsiballZ_setup.py'
Jan 27 18:43:13 compute-0 sudo[77204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:13 compute-0 python3.9[77206]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:43:13 compute-0 sudo[77204]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:14 compute-0 sudo[77288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfpvhlrngwqhcdmkokdacdadzwhgwlcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539392.932356-29-224142328725720/AnsiballZ_dnf.py'
Jan 27 18:43:14 compute-0 sudo[77288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:14 compute-0 python3.9[77290]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 18:43:15 compute-0 sudo[77288]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:16 compute-0 python3.9[77441]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:43:17 compute-0 python3.9[77592]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 18:43:18 compute-0 python3.9[77742]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:43:19 compute-0 python3.9[77892]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:43:19 compute-0 sshd-session[76900]: Connection closed by 192.168.122.31 port 40210
Jan 27 18:43:20 compute-0 sshd-session[76897]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:43:20 compute-0 systemd-logind[795]: Session 17 logged out. Waiting for processes to exit.
Jan 27 18:43:20 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 27 18:43:20 compute-0 systemd[1]: session-17.scope: Consumed 6.170s CPU time.
Jan 27 18:43:20 compute-0 systemd-logind[795]: Removed session 17.
Jan 27 18:43:25 compute-0 sshd-session[77917]: Accepted publickey for zuul from 192.168.122.31 port 36910 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:43:25 compute-0 systemd-logind[795]: New session 18 of user zuul.
Jan 27 18:43:25 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 27 18:43:25 compute-0 sshd-session[77917]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:43:26 compute-0 python3.9[78070]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:43:28 compute-0 sudo[78224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkxogdyfrphctrjpnjmydzbfqpyipbcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539407.958487-45-266267593295330/AnsiballZ_file.py'
Jan 27 18:43:28 compute-0 sudo[78224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:28 compute-0 python3.9[78226]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:28 compute-0 sudo[78224]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:29 compute-0 sudo[78376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wluqmnilktpummucyaynfckyodagqhlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539408.8123302-45-263260901682717/AnsiballZ_file.py'
Jan 27 18:43:29 compute-0 sudo[78376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:29 compute-0 python3.9[78378]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:29 compute-0 sudo[78376]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:29 compute-0 sudo[78528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmwonndcsbardodfwljxgxjurybxlpdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539409.5089614-60-66915098739980/AnsiballZ_stat.py'
Jan 27 18:43:29 compute-0 sudo[78528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:30 compute-0 python3.9[78530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:30 compute-0 sudo[78528]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:30 compute-0 sudo[78651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rakfpjkpbvszqyxuzajeudgtnunusgbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539409.5089614-60-66915098739980/AnsiballZ_copy.py'
Jan 27 18:43:30 compute-0 sudo[78651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:30 compute-0 python3.9[78653]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539409.5089614-60-66915098739980/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4b33ce3539d02e3086f9495c40c781648f7548ca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:30 compute-0 sudo[78651]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:31 compute-0 sudo[78803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yejdgrzghwnpabjriurtnejlynhdejlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539410.9367437-60-260858864952210/AnsiballZ_stat.py'
Jan 27 18:43:31 compute-0 sudo[78803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:31 compute-0 python3.9[78805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:31 compute-0 sudo[78803]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:31 compute-0 sudo[78926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okwxbbjtvcufzdqsgvfyudlcmlvecfeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539410.9367437-60-260858864952210/AnsiballZ_copy.py'
Jan 27 18:43:31 compute-0 sudo[78926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:31 compute-0 python3.9[78928]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539410.9367437-60-260858864952210/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=376e049d50e92ab318043cf41e8466eb78ef9a12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:31 compute-0 sudo[78926]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:32 compute-0 sudo[79078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyneninlzigtvznvnydlajotnzifusrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539412.0993583-60-218489444693394/AnsiballZ_stat.py'
Jan 27 18:43:32 compute-0 sudo[79078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:32 compute-0 python3.9[79080]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:32 compute-0 sudo[79078]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:32 compute-0 sudo[79201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgpcoyinbftfidvmrloixjwtxajlnnfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539412.0993583-60-218489444693394/AnsiballZ_copy.py'
Jan 27 18:43:32 compute-0 sudo[79201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:33 compute-0 python3.9[79203]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539412.0993583-60-218489444693394/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a2912d82b99c95e6e2be3d8cc7d7673c89acb5f8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:33 compute-0 sudo[79201]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:33 compute-0 sudo[79353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sobwnnlprxwkkbtjidnaovgejbwiwsaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539413.4227345-104-239702058192533/AnsiballZ_file.py'
Jan 27 18:43:33 compute-0 sudo[79353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:33 compute-0 python3.9[79355]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:33 compute-0 sudo[79353]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:34 compute-0 sudo[79505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aflolshxqwiidrqvtdciahmeekdanuzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539414.0664656-104-218325494667844/AnsiballZ_file.py'
Jan 27 18:43:34 compute-0 sudo[79505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:34 compute-0 python3.9[79507]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:34 compute-0 sudo[79505]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:35 compute-0 sudo[79657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtmqlzyjuxxfsrbwqchaxvbcdemxcoey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539414.7982934-119-199874656886125/AnsiballZ_stat.py'
Jan 27 18:43:35 compute-0 sudo[79657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:35 compute-0 python3.9[79659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:35 compute-0 sudo[79657]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:35 compute-0 sudo[79780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyguweccbyuuqbddbtgxnkgnojhxnedq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539414.7982934-119-199874656886125/AnsiballZ_copy.py'
Jan 27 18:43:35 compute-0 sudo[79780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:35 compute-0 python3.9[79782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539414.7982934-119-199874656886125/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b13756ab3cd9966b40496e1037a2d31badca2333 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:35 compute-0 sudo[79780]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:36 compute-0 sudo[79932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iigxcxkgxdpqcmxximvevprlbgzelxpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539416.1536865-119-240360681190727/AnsiballZ_stat.py'
Jan 27 18:43:36 compute-0 sudo[79932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:36 compute-0 python3.9[79934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:36 compute-0 sudo[79932]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:37 compute-0 sudo[80055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtdbtfwoyeocumnfraeiatsvbdsbupmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539416.1536865-119-240360681190727/AnsiballZ_copy.py'
Jan 27 18:43:37 compute-0 sudo[80055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:37 compute-0 python3.9[80057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539416.1536865-119-240360681190727/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=376e049d50e92ab318043cf41e8466eb78ef9a12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:37 compute-0 sudo[80055]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:37 compute-0 sudo[80207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdkjmbqiakfhfavaokjnofotvesqbbgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539417.4752705-119-76804074600672/AnsiballZ_stat.py'
Jan 27 18:43:37 compute-0 sudo[80207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:38 compute-0 python3.9[80209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:38 compute-0 sudo[80207]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:38 compute-0 sudo[80332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krfcwzustjzzifntxbxoluijmumitjos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539417.4752705-119-76804074600672/AnsiballZ_copy.py'
Jan 27 18:43:38 compute-0 sudo[80332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:38 compute-0 python3.9[80334]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539417.4752705-119-76804074600672/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=22f30fbef70ffc598f84919186996d6b80af08a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:38 compute-0 sudo[80332]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:38 compute-0 sshd-session[80257]: Received disconnect from 45.148.10.141 port 12764:11:  [preauth]
Jan 27 18:43:38 compute-0 sshd-session[80257]: Disconnected from authenticating user root 45.148.10.141 port 12764 [preauth]
Jan 27 18:43:39 compute-0 sudo[80484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kerqlnqivdwwmpfriwgxregabuwolabk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539418.8583665-163-114810663119162/AnsiballZ_file.py'
Jan 27 18:43:39 compute-0 sudo[80484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:39 compute-0 python3.9[80486]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:39 compute-0 sudo[80484]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:39 compute-0 sudo[80636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utdwwumoiwqylknidxhfknisllzlrdrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539419.5089304-163-234353078011099/AnsiballZ_file.py'
Jan 27 18:43:39 compute-0 sudo[80636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:40 compute-0 python3.9[80638]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:40 compute-0 sudo[80636]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:40 compute-0 sudo[80788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmxuavsjfonkifecbpjuuwvyzzunerup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539420.231952-178-242255756985072/AnsiballZ_stat.py'
Jan 27 18:43:40 compute-0 sudo[80788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:40 compute-0 python3.9[80790]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:40 compute-0 sudo[80788]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:41 compute-0 sudo[80911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baofufgtamzifzitpvgedckmqytcexaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539420.231952-178-242255756985072/AnsiballZ_copy.py'
Jan 27 18:43:41 compute-0 sudo[80911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:41 compute-0 python3.9[80913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539420.231952-178-242255756985072/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=63c97eca1703a1b5d1e1b70ed3ce798e1c9e0a27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:41 compute-0 sudo[80911]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:41 compute-0 sudo[81063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nweqpwalzkgflniltkgbteccmllnsmfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539421.4485278-178-125704260016214/AnsiballZ_stat.py'
Jan 27 18:43:41 compute-0 sudo[81063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:41 compute-0 python3.9[81065]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:41 compute-0 sudo[81063]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:42 compute-0 sudo[81186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcdzxzkkmfkkbzxgbhumtxzgimtiolfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539421.4485278-178-125704260016214/AnsiballZ_copy.py'
Jan 27 18:43:42 compute-0 sudo[81186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:42 compute-0 python3.9[81188]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539421.4485278-178-125704260016214/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=569983934374fb202f13827ddee97db8081c1168 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:42 compute-0 sudo[81186]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:42 compute-0 sudo[81338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjnohqbwmteccuuznyjksrbseyvnfzbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539422.6696265-178-274022305065438/AnsiballZ_stat.py'
Jan 27 18:43:42 compute-0 sudo[81338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:43 compute-0 python3.9[81340]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:43 compute-0 sudo[81338]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:43 compute-0 sudo[81461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsixkkwetabambrfduxmvwyudbvfjqgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539422.6696265-178-274022305065438/AnsiballZ_copy.py'
Jan 27 18:43:43 compute-0 sudo[81461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:43 compute-0 python3.9[81463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539422.6696265-178-274022305065438/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=82eb25dac1350c5d6ebee68c671cd54d314503d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:43 compute-0 sudo[81461]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:44 compute-0 sudo[81613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gojvlecawwznooqpjrapbbssgnlmmynh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539423.9315894-222-84263151000699/AnsiballZ_file.py'
Jan 27 18:43:44 compute-0 sudo[81613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:44 compute-0 python3.9[81615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:44 compute-0 sudo[81613]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:44 compute-0 sudo[81765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cusurlpenydhsxbefuxjkjmjdfleffri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539424.601249-222-12668199602280/AnsiballZ_file.py'
Jan 27 18:43:44 compute-0 sudo[81765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:45 compute-0 python3.9[81767]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:45 compute-0 sudo[81765]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:45 compute-0 sudo[81917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrxcfqvkdgassrdjapqmapgexombxsad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539425.2898076-237-198873382837938/AnsiballZ_stat.py'
Jan 27 18:43:45 compute-0 sudo[81917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:45 compute-0 python3.9[81919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:45 compute-0 sudo[81917]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:46 compute-0 sudo[82040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxuxhsgtthjpkapznrmzvbaiyeuxlmhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539425.2898076-237-198873382837938/AnsiballZ_copy.py'
Jan 27 18:43:46 compute-0 sudo[82040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:46 compute-0 python3.9[82042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539425.2898076-237-198873382837938/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=41044d3a972d47f9f39aca6c70ca936c2934d2e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:46 compute-0 sudo[82040]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:46 compute-0 sudo[82192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwslflvecaqhdisvdeydwodxjtokewll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539426.4720137-237-5794997968075/AnsiballZ_stat.py'
Jan 27 18:43:46 compute-0 sudo[82192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:47 compute-0 python3.9[82194]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:47 compute-0 sudo[82192]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:47 compute-0 sudo[82315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruchgitzmxxtdjigtfhblsvnqzihndji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539426.4720137-237-5794997968075/AnsiballZ_copy.py'
Jan 27 18:43:47 compute-0 sudo[82315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:47 compute-0 python3.9[82317]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539426.4720137-237-5794997968075/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=deb4343f7c50f784836d5ec2359c82b86f97b737 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:47 compute-0 sudo[82315]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:48 compute-0 sudo[82467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgokhuvllxlpazifrrekxbehkigdhjjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539427.7394934-237-95141460765907/AnsiballZ_stat.py'
Jan 27 18:43:48 compute-0 sudo[82467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:48 compute-0 python3.9[82469]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:48 compute-0 sudo[82467]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:48 compute-0 sudo[82590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggasvxtplkoeyhlduletqreuoisumxcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539427.7394934-237-95141460765907/AnsiballZ_copy.py'
Jan 27 18:43:48 compute-0 sudo[82590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:48 compute-0 python3.9[82592]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539427.7394934-237-95141460765907/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8fa253cfc7619d15776924ea1469cfcf3791a5c2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:48 compute-0 sudo[82590]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:49 compute-0 sudo[82742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afbwdnxitcjovhmbxtdfmtuipldjqpae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539429.0292141-281-179195422069029/AnsiballZ_file.py'
Jan 27 18:43:49 compute-0 sudo[82742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:49 compute-0 python3.9[82744]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:49 compute-0 sudo[82742]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:50 compute-0 sudo[82894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glpexpqmtxysjezonuqrlneunmimkijo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539429.7437365-281-85191940912893/AnsiballZ_file.py'
Jan 27 18:43:50 compute-0 sudo[82894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:50 compute-0 python3.9[82896]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:50 compute-0 sudo[82894]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:50 compute-0 sudo[83046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytwrohzooalidtieoqbsfucazvcuzixq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539430.5316708-296-87526366541571/AnsiballZ_stat.py'
Jan 27 18:43:50 compute-0 sudo[83046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:51 compute-0 python3.9[83048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:51 compute-0 sudo[83046]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:51 compute-0 chronyd[65633]: Selected source 216.232.132.95 (pool.ntp.org)
Jan 27 18:43:51 compute-0 sudo[83169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsfvogwodzbrvjaqqwxrtfjixyhhchph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539430.5316708-296-87526366541571/AnsiballZ_copy.py'
Jan 27 18:43:51 compute-0 sudo[83169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:51 compute-0 python3.9[83171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539430.5316708-296-87526366541571/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4b6e65e1b021a2623955f27e77f7b84f4020e8d7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:51 compute-0 sudo[83169]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:52 compute-0 sudo[83321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzrxynmkvyqnxyfhdgkdoadwjdwsnjoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539431.83242-296-202032757635500/AnsiballZ_stat.py'
Jan 27 18:43:52 compute-0 sudo[83321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:52 compute-0 python3.9[83323]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:52 compute-0 sudo[83321]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:52 compute-0 sudo[83444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-batbapjlcounsbghgjvjqmzvsttmhppj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539431.83242-296-202032757635500/AnsiballZ_copy.py'
Jan 27 18:43:52 compute-0 sudo[83444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:52 compute-0 python3.9[83446]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539431.83242-296-202032757635500/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=569983934374fb202f13827ddee97db8081c1168 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:52 compute-0 sudo[83444]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:53 compute-0 sudo[83596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yelpxxjidtfodcrbtqvorruyoodmoedk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539433.0335984-296-244060386570283/AnsiballZ_stat.py'
Jan 27 18:43:53 compute-0 sudo[83596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:53 compute-0 python3.9[83598]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:53 compute-0 sudo[83596]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:53 compute-0 sudo[83719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucehryemfarwspuwpfvnskwlokmczqhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539433.0335984-296-244060386570283/AnsiballZ_copy.py'
Jan 27 18:43:53 compute-0 sudo[83719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:54 compute-0 python3.9[83721]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539433.0335984-296-244060386570283/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=056d50ee642c9cd0527ae0a2f80e9faa8c0b0970 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:54 compute-0 sudo[83719]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:55 compute-0 sudo[83871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsjkjppmapwyfdhecfyrugdwhkztiwti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539434.8853192-356-170464316395687/AnsiballZ_file.py'
Jan 27 18:43:55 compute-0 sudo[83871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:55 compute-0 python3.9[83873]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:55 compute-0 sudo[83871]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:55 compute-0 sudo[84023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmeucjhpzhmbdgyptvkrubspgjwxqckj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539435.611536-364-102255573391036/AnsiballZ_stat.py'
Jan 27 18:43:55 compute-0 sudo[84023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:56 compute-0 python3.9[84025]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:56 compute-0 sudo[84023]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:56 compute-0 sudo[84146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xftlgkwqgcjahmvmuuplruicdgsgfiph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539435.611536-364-102255573391036/AnsiballZ_copy.py'
Jan 27 18:43:56 compute-0 sudo[84146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:57 compute-0 python3.9[84148]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539435.611536-364-102255573391036/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=15265196e336ee28e0080070f65aca785e7a9615 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:57 compute-0 sudo[84146]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:57 compute-0 sudo[84298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywlanvmwdewewptbnlxlvvtlslyixxtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539437.243692-380-200461943395033/AnsiballZ_file.py'
Jan 27 18:43:57 compute-0 sudo[84298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:57 compute-0 python3.9[84300]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:57 compute-0 sudo[84298]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:58 compute-0 sudo[84450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdwlnpyjqgymmswviqhtcdsrvrnstzzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539437.923423-388-89379657300620/AnsiballZ_stat.py'
Jan 27 18:43:58 compute-0 sudo[84450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:58 compute-0 python3.9[84452]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:43:58 compute-0 sudo[84450]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:58 compute-0 sudo[84573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcyibnauzbdxilbwlgvflyyhpdiwntuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539437.923423-388-89379657300620/AnsiballZ_copy.py'
Jan 27 18:43:58 compute-0 sudo[84573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:59 compute-0 python3.9[84575]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539437.923423-388-89379657300620/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=15265196e336ee28e0080070f65aca785e7a9615 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:43:59 compute-0 sudo[84573]: pam_unix(sudo:session): session closed for user root
Jan 27 18:43:59 compute-0 sudo[84725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrizqhcawjbmgvwlroewzwxrblfsizvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539439.2513962-404-42301811600490/AnsiballZ_file.py'
Jan 27 18:43:59 compute-0 sudo[84725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:43:59 compute-0 python3.9[84727]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:43:59 compute-0 sudo[84725]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:00 compute-0 sudo[84877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfzejkrlcyglsibwibucrnwkfoemnesi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539439.9473057-412-118034820582284/AnsiballZ_stat.py'
Jan 27 18:44:00 compute-0 sudo[84877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:00 compute-0 python3.9[84879]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:00 compute-0 sudo[84877]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:00 compute-0 sudo[85000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kswauiefzvaprhkacotcnzsodphqgpgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539439.9473057-412-118034820582284/AnsiballZ_copy.py'
Jan 27 18:44:00 compute-0 sudo[85000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:01 compute-0 python3.9[85002]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539439.9473057-412-118034820582284/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=15265196e336ee28e0080070f65aca785e7a9615 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:01 compute-0 sudo[85000]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:01 compute-0 sudo[85152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smrxwfnwaqiqvtzmupoovaphcjhcmrib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539441.2542899-428-83598564764315/AnsiballZ_file.py'
Jan 27 18:44:01 compute-0 sudo[85152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:01 compute-0 python3.9[85154]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:44:01 compute-0 sudo[85152]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:02 compute-0 sudo[85304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgeddhwrvfrlpsytcmocurcfjbylsqrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539441.935361-436-80306687073361/AnsiballZ_stat.py'
Jan 27 18:44:02 compute-0 sudo[85304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:02 compute-0 python3.9[85306]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:02 compute-0 sudo[85304]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:02 compute-0 sudo[85427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auvzonahrjkiudqxfeuazzsczdntjsnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539441.935361-436-80306687073361/AnsiballZ_copy.py'
Jan 27 18:44:02 compute-0 sudo[85427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:02 compute-0 python3.9[85429]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539441.935361-436-80306687073361/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=15265196e336ee28e0080070f65aca785e7a9615 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:03 compute-0 sudo[85427]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:03 compute-0 sudo[85579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hunfptmeybkgmkrfmahufvjqecvpvoib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539443.1919882-452-56184796688536/AnsiballZ_file.py'
Jan 27 18:44:03 compute-0 sudo[85579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:03 compute-0 python3.9[85581]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:44:03 compute-0 sudo[85579]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:04 compute-0 sudo[85731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmsrburfgghmtkzyflgxshwsruaohtie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539443.8894227-460-34044075425130/AnsiballZ_stat.py'
Jan 27 18:44:04 compute-0 sudo[85731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:04 compute-0 python3.9[85733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:04 compute-0 sudo[85731]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:04 compute-0 sudo[85854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnikdtxhzmcuvfyicrhsdqeymlobcelo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539443.8894227-460-34044075425130/AnsiballZ_copy.py'
Jan 27 18:44:04 compute-0 sudo[85854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:05 compute-0 python3.9[85856]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539443.8894227-460-34044075425130/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=15265196e336ee28e0080070f65aca785e7a9615 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:05 compute-0 sudo[85854]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:05 compute-0 sudo[86006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odangqbogmvizewfqgfndxxrpckunsaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539445.3681464-476-140682291065322/AnsiballZ_file.py'
Jan 27 18:44:05 compute-0 sudo[86006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:05 compute-0 python3.9[86008]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:44:05 compute-0 sudo[86006]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:06 compute-0 sudo[86158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypljxnvayzkfpvkgsiixmggnmlsqonwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539446.0982835-484-42450957335165/AnsiballZ_stat.py'
Jan 27 18:44:06 compute-0 sudo[86158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:06 compute-0 python3.9[86160]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:06 compute-0 sudo[86158]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:07 compute-0 sudo[86281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwzermjozedgspwwzgntsgstuexujion ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539446.0982835-484-42450957335165/AnsiballZ_copy.py'
Jan 27 18:44:07 compute-0 sudo[86281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:07 compute-0 python3.9[86283]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539446.0982835-484-42450957335165/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=15265196e336ee28e0080070f65aca785e7a9615 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:07 compute-0 sudo[86281]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:07 compute-0 sudo[86433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uncsvzsunvwluqzhomrfthyyrwajrbhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539447.5005438-500-166434448937030/AnsiballZ_file.py'
Jan 27 18:44:07 compute-0 sudo[86433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:08 compute-0 python3.9[86435]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:44:08 compute-0 sudo[86433]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:08 compute-0 sudo[86585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgqswindyizdxguqomqiuwfobqybmrtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539448.2083178-508-13421048642364/AnsiballZ_stat.py'
Jan 27 18:44:08 compute-0 sudo[86585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:08 compute-0 python3.9[86587]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:08 compute-0 sudo[86585]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:09 compute-0 sudo[86708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfchctcvpnmovrvyrzzxtddbtwvrywxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539448.2083178-508-13421048642364/AnsiballZ_copy.py'
Jan 27 18:44:09 compute-0 sudo[86708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:09 compute-0 python3.9[86710]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539448.2083178-508-13421048642364/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=15265196e336ee28e0080070f65aca785e7a9615 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:09 compute-0 sudo[86708]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:09 compute-0 sudo[86860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qffyoqkkqroiwkglsrnfytuvwhazvjyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539449.468152-524-196702852358827/AnsiballZ_file.py'
Jan 27 18:44:09 compute-0 sudo[86860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:09 compute-0 python3.9[86862]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:44:09 compute-0 sudo[86860]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:10 compute-0 sudo[87012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvwxnfzyjldbaaywlipgoxkoqjebsbag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539450.125389-532-245447350291279/AnsiballZ_stat.py'
Jan 27 18:44:10 compute-0 sudo[87012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:10 compute-0 python3.9[87014]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:10 compute-0 sudo[87012]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:11 compute-0 sudo[87135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcwodigvdlmuwiymdhsbqcujieilkpvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539450.125389-532-245447350291279/AnsiballZ_copy.py'
Jan 27 18:44:11 compute-0 sudo[87135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:11 compute-0 python3.9[87137]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539450.125389-532-245447350291279/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=15265196e336ee28e0080070f65aca785e7a9615 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:11 compute-0 sudo[87135]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:11 compute-0 sshd-session[77920]: Connection closed by 192.168.122.31 port 36910
Jan 27 18:44:11 compute-0 sshd-session[77917]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:44:11 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 27 18:44:11 compute-0 systemd[1]: session-18.scope: Consumed 37.471s CPU time.
Jan 27 18:44:11 compute-0 systemd-logind[795]: Session 18 logged out. Waiting for processes to exit.
Jan 27 18:44:11 compute-0 systemd-logind[795]: Removed session 18.
Jan 27 18:44:17 compute-0 sshd-session[87163]: Accepted publickey for zuul from 192.168.122.31 port 34624 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:44:17 compute-0 systemd-logind[795]: New session 19 of user zuul.
Jan 27 18:44:17 compute-0 systemd[1]: Started Session 19 of User zuul.
Jan 27 18:44:17 compute-0 sshd-session[87163]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:44:18 compute-0 python3.9[87316]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:44:19 compute-0 sudo[87470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwthjuzmggcgsxkkrbddggrsuanaixrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539459.0269053-29-109978124301981/AnsiballZ_file.py'
Jan 27 18:44:19 compute-0 sudo[87470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:19 compute-0 python3.9[87472]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:44:19 compute-0 sudo[87470]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:20 compute-0 sudo[87622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nufpovzyfohwyjwbdjwvehfgdgmixkuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539460.0772626-29-225613295975244/AnsiballZ_file.py'
Jan 27 18:44:20 compute-0 sudo[87622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:20 compute-0 python3.9[87624]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:44:20 compute-0 sudo[87622]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:21 compute-0 python3.9[87774]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:44:21 compute-0 sudo[87924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kajgifjwjxydhignvhuvwnlhqaxskibg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539461.5497665-52-156147105302573/AnsiballZ_seboolean.py'
Jan 27 18:44:21 compute-0 sudo[87924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:22 compute-0 python3.9[87926]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 27 18:44:23 compute-0 sudo[87924]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:23 compute-0 sudo[88080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgbyaneaaqhhvapqymsxmrumslwlolzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539463.6769707-62-252550345285936/AnsiballZ_setup.py'
Jan 27 18:44:24 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 27 18:44:24 compute-0 sudo[88080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:24 compute-0 python3.9[88082]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:44:24 compute-0 sudo[88080]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:24 compute-0 sudo[88164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuqlnvwmujktsfsqoedyiemscbbnwjqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539463.6769707-62-252550345285936/AnsiballZ_dnf.py'
Jan 27 18:44:24 compute-0 sudo[88164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:25 compute-0 python3.9[88166]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:44:26 compute-0 sudo[88164]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:27 compute-0 sudo[88317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgzherrdwewkvjjjohjcibidunumblin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539466.6139674-74-47726064075760/AnsiballZ_systemd.py'
Jan 27 18:44:27 compute-0 sudo[88317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:27 compute-0 python3.9[88319]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 18:44:27 compute-0 sudo[88317]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:28 compute-0 sudo[88472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zezfqigfmongjzmybdwlyishidpuzkzt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769539467.855525-82-60565075462364/AnsiballZ_edpm_nftables_snippet.py'
Jan 27 18:44:28 compute-0 sudo[88472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:28 compute-0 python3[88474]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 27 18:44:28 compute-0 sudo[88472]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:29 compute-0 sudo[88624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzbocgxkqjjzcagxzdyfmhefrbmfqhmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539468.7610323-91-127837497900520/AnsiballZ_file.py'
Jan 27 18:44:29 compute-0 sudo[88624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:29 compute-0 python3.9[88626]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:29 compute-0 sudo[88624]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:29 compute-0 sudo[88776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xypwjprrcapvprjioqgwidadmkkyedsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539469.4271472-99-100987849980402/AnsiballZ_stat.py'
Jan 27 18:44:29 compute-0 sudo[88776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:30 compute-0 python3.9[88778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:30 compute-0 sudo[88776]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:30 compute-0 sudo[88854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuqlmuzfhijpmnowlpbekodcnobtdpyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539469.4271472-99-100987849980402/AnsiballZ_file.py'
Jan 27 18:44:30 compute-0 sudo[88854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:30 compute-0 python3.9[88856]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:30 compute-0 sudo[88854]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:31 compute-0 sudo[89006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxxjslotntogagdsapevtklndpqaxfgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539470.7797816-111-132402679817943/AnsiballZ_stat.py'
Jan 27 18:44:31 compute-0 sudo[89006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:31 compute-0 python3.9[89008]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:31 compute-0 sudo[89006]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:31 compute-0 sudo[89084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyvqadnhkpxesagelypczbjfcgoiphdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539470.7797816-111-132402679817943/AnsiballZ_file.py'
Jan 27 18:44:31 compute-0 sudo[89084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:31 compute-0 python3.9[89086]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.289oj29y recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:31 compute-0 sudo[89084]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:32 compute-0 sudo[89236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrcgtxomgzboiqjhtqywzsfgspsrlgzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539471.9347923-123-31927322779100/AnsiballZ_stat.py'
Jan 27 18:44:32 compute-0 sudo[89236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:32 compute-0 python3.9[89238]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:32 compute-0 sudo[89236]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:32 compute-0 sudo[89314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yclodpupxxrqwzhelsabcgejrrecalqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539471.9347923-123-31927322779100/AnsiballZ_file.py'
Jan 27 18:44:32 compute-0 sudo[89314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:32 compute-0 python3.9[89316]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:32 compute-0 sudo[89314]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:33 compute-0 sudo[89466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kogbvhnlvirtklzcfxrkldwyueyjqcsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539473.0988283-136-97270092922691/AnsiballZ_command.py'
Jan 27 18:44:33 compute-0 sudo[89466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:33 compute-0 python3.9[89468]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:44:33 compute-0 sudo[89466]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:34 compute-0 sudo[89619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldnaimtgjhsmhjpqcoxpxdsbwtafizba ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769539474.0872726-144-221630071605870/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 18:44:34 compute-0 sudo[89619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:34 compute-0 python3[89621]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 18:44:34 compute-0 sudo[89619]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:35 compute-0 sudo[89771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykrwlhvoupgqodpzaawpusqizzgtdgxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539474.9860728-152-80156655146358/AnsiballZ_stat.py'
Jan 27 18:44:35 compute-0 sudo[89771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:35 compute-0 python3.9[89773]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:35 compute-0 sudo[89771]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:36 compute-0 sudo[89896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruytlykwiwzwlxwguendgrbjwhfnasjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539474.9860728-152-80156655146358/AnsiballZ_copy.py'
Jan 27 18:44:36 compute-0 sudo[89896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:36 compute-0 python3.9[89898]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539474.9860728-152-80156655146358/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:36 compute-0 sudo[89896]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:36 compute-0 sudo[90048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgfbnhmbrhnxwwpgbtibfpzstdhlxurp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539476.5680783-167-3742142777803/AnsiballZ_stat.py'
Jan 27 18:44:36 compute-0 sudo[90048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:37 compute-0 python3.9[90050]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:37 compute-0 sudo[90048]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:37 compute-0 sudo[90173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lijnnaqvozqtkmyqrjluoyogotbntbfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539476.5680783-167-3742142777803/AnsiballZ_copy.py'
Jan 27 18:44:37 compute-0 sudo[90173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:37 compute-0 python3.9[90175]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539476.5680783-167-3742142777803/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:37 compute-0 sudo[90173]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:38 compute-0 sudo[90325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikioxwgaihqaqgfgbetvrlydzppmcxfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539477.9032075-182-270790725384525/AnsiballZ_stat.py'
Jan 27 18:44:38 compute-0 sudo[90325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:38 compute-0 python3.9[90327]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:38 compute-0 sudo[90325]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:38 compute-0 sudo[90450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzuublkufwwgwancwdnthmgcmcsixvpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539477.9032075-182-270790725384525/AnsiballZ_copy.py'
Jan 27 18:44:38 compute-0 sudo[90450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:39 compute-0 python3.9[90452]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539477.9032075-182-270790725384525/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:39 compute-0 sudo[90450]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:39 compute-0 sudo[90602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfshivsnbmzegwjyeodujfpdykmfmscp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539479.2350311-197-266451861181403/AnsiballZ_stat.py'
Jan 27 18:44:39 compute-0 sudo[90602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:39 compute-0 python3.9[90604]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:39 compute-0 sudo[90602]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:40 compute-0 sudo[90727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyscqgoibuzslgtvayamxctluabsmmwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539479.2350311-197-266451861181403/AnsiballZ_copy.py'
Jan 27 18:44:40 compute-0 sudo[90727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:40 compute-0 python3.9[90729]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539479.2350311-197-266451861181403/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:40 compute-0 sudo[90727]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:41 compute-0 sudo[90879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcgcvjhwncctbmjiesrenhskqfchdxwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539480.5862627-212-208198600006160/AnsiballZ_stat.py'
Jan 27 18:44:41 compute-0 sudo[90879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:41 compute-0 python3.9[90881]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:41 compute-0 sudo[90879]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:41 compute-0 sudo[91004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naqzevznyqkksdrnadfifigdcsxlxxgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539480.5862627-212-208198600006160/AnsiballZ_copy.py'
Jan 27 18:44:41 compute-0 sudo[91004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:41 compute-0 python3.9[91006]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539480.5862627-212-208198600006160/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:41 compute-0 sudo[91004]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:42 compute-0 sudo[91156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzsyeytjgccyyrvebjtomoauilijwsgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539482.1634796-227-140892851610080/AnsiballZ_file.py'
Jan 27 18:44:42 compute-0 sudo[91156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:42 compute-0 python3.9[91158]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:42 compute-0 sudo[91156]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:43 compute-0 sudo[91308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jngphfiixnhzsaozizqqsoxjacodatmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539482.8057384-235-189089601566097/AnsiballZ_command.py'
Jan 27 18:44:43 compute-0 sudo[91308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:43 compute-0 python3.9[91310]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:44:43 compute-0 sudo[91308]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:43 compute-0 sudo[91463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coxkioeumzrwtoygqmdjpewvjhmaatvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539483.4811194-243-245380120256239/AnsiballZ_blockinfile.py'
Jan 27 18:44:43 compute-0 sudo[91463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:44 compute-0 python3.9[91465]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:44 compute-0 sudo[91463]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:44 compute-0 sudo[91615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrcxfxbqekiovetzwsmpzizrujkegkso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539484.4207973-252-239514366518962/AnsiballZ_command.py'
Jan 27 18:44:44 compute-0 sudo[91615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:44 compute-0 python3.9[91617]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:44:44 compute-0 sudo[91615]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:45 compute-0 sudo[91768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agdtrjogmarfafddznetbmnzgiwpxawa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539485.0848095-260-63545516122514/AnsiballZ_stat.py'
Jan 27 18:44:45 compute-0 sudo[91768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:45 compute-0 python3.9[91770]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:44:45 compute-0 sudo[91768]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:46 compute-0 sudo[91922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcoubqizxkcxgfbtafgxmbyucdvdkvrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539485.9559042-268-231359127944837/AnsiballZ_command.py'
Jan 27 18:44:46 compute-0 sudo[91922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:46 compute-0 python3.9[91924]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:44:46 compute-0 sudo[91922]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:46 compute-0 sudo[92077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfwpsyovuewmsikkglnuwggldqsgylxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539486.5964847-276-74479982810753/AnsiballZ_file.py'
Jan 27 18:44:46 compute-0 sudo[92077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:47 compute-0 python3.9[92079]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:47 compute-0 sudo[92077]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:48 compute-0 python3.9[92229]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:44:49 compute-0 sudo[92380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phqtguabqidaotteqmmylwjmmpxhdxdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539488.9570234-316-122373938574268/AnsiballZ_command.py'
Jan 27 18:44:49 compute-0 sudo[92380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:49 compute-0 python3.9[92382]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:44:49 compute-0 ovs-vsctl[92383]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 27 18:44:49 compute-0 sudo[92380]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:50 compute-0 sudo[92533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maanhuoezzvkxsgyilrwojczdvbifyne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539489.8222945-325-75206831488452/AnsiballZ_command.py'
Jan 27 18:44:50 compute-0 sudo[92533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:50 compute-0 python3.9[92535]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:44:50 compute-0 sudo[92533]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:50 compute-0 sudo[92688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voghegqjvivlqtxmvxvwnefslrbwymyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539490.4590898-333-121445844007877/AnsiballZ_command.py'
Jan 27 18:44:50 compute-0 sudo[92688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:50 compute-0 python3.9[92690]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:44:50 compute-0 ovs-vsctl[92691]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 27 18:44:50 compute-0 sudo[92688]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:51 compute-0 python3.9[92841]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:44:52 compute-0 sudo[92995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpicehmbwrncpmcsujapzgsqbhzwxbnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539491.7878835-350-230262667481226/AnsiballZ_file.py'
Jan 27 18:44:52 compute-0 sudo[92995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:52 compute-0 sshd-session[92869]: Invalid user sol from 45.148.10.240 port 49140
Jan 27 18:44:52 compute-0 python3.9[92997]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:44:52 compute-0 sudo[92995]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:52 compute-0 sshd-session[92869]: Connection closed by invalid user sol 45.148.10.240 port 49140 [preauth]
Jan 27 18:44:52 compute-0 sudo[93147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blntacpgvbceonnmhohlljmbygmcvime ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539492.525349-358-245030003486525/AnsiballZ_stat.py'
Jan 27 18:44:52 compute-0 sudo[93147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:53 compute-0 python3.9[93149]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:53 compute-0 sudo[93147]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:53 compute-0 sudo[93225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knpxtnopybwmwisudynodmawdjnajmnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539492.525349-358-245030003486525/AnsiballZ_file.py'
Jan 27 18:44:53 compute-0 sudo[93225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:53 compute-0 python3.9[93227]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:44:53 compute-0 sudo[93225]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:54 compute-0 sudo[93377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmnjxujyllrfbjdyfvzubqoswpgyxyud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539493.7539706-358-266688516492046/AnsiballZ_stat.py'
Jan 27 18:44:54 compute-0 sudo[93377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:54 compute-0 python3.9[93379]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:54 compute-0 sudo[93377]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:54 compute-0 sudo[93455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnlarrvvjwbxloodfefudabkzapioqlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539493.7539706-358-266688516492046/AnsiballZ_file.py'
Jan 27 18:44:54 compute-0 sudo[93455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:54 compute-0 python3.9[93457]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:44:54 compute-0 sudo[93455]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:55 compute-0 sudo[93607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgrybzbzswuewirjnucjbhlpkliaeule ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539494.9662228-381-55589076278302/AnsiballZ_file.py'
Jan 27 18:44:55 compute-0 sudo[93607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:55 compute-0 python3.9[93609]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:55 compute-0 sudo[93607]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:56 compute-0 sudo[93759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imxckghvciikvuwiswzneptdkarzlmwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539495.7200515-389-243981945862983/AnsiballZ_stat.py'
Jan 27 18:44:56 compute-0 sudo[93759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:56 compute-0 python3.9[93761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:56 compute-0 sudo[93759]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:56 compute-0 sudo[93837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmgaiudxjznnaomuntamsizmnqjwmrnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539495.7200515-389-243981945862983/AnsiballZ_file.py'
Jan 27 18:44:56 compute-0 sudo[93837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:56 compute-0 python3.9[93839]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:56 compute-0 sudo[93837]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:57 compute-0 sudo[93989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtoojugjmekqglvdjfyqsxbqhudnqkkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539496.8711402-401-181451961956107/AnsiballZ_stat.py'
Jan 27 18:44:57 compute-0 sudo[93989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:57 compute-0 python3.9[93991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:57 compute-0 sudo[93989]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:57 compute-0 sudo[94067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjxdwlankklpkehmztrgmwwanqyhuhds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539496.8711402-401-181451961956107/AnsiballZ_file.py'
Jan 27 18:44:57 compute-0 sudo[94067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:57 compute-0 python3.9[94069]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:44:57 compute-0 sudo[94067]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:58 compute-0 sudo[94219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywlzitykwykskcwrlteigphyevlrfkcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539498.034326-413-34029203934924/AnsiballZ_systemd.py'
Jan 27 18:44:58 compute-0 sudo[94219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:58 compute-0 python3.9[94221]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:44:58 compute-0 systemd[1]: Reloading.
Jan 27 18:44:58 compute-0 systemd-rc-local-generator[94245]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:44:58 compute-0 systemd-sysv-generator[94251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:44:59 compute-0 sudo[94219]: pam_unix(sudo:session): session closed for user root
Jan 27 18:44:59 compute-0 sudo[94407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvwljnujvpuakgfjjvztcfxlghtjibmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539499.2272248-421-254690978277355/AnsiballZ_stat.py'
Jan 27 18:44:59 compute-0 sudo[94407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:44:59 compute-0 python3.9[94409]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:44:59 compute-0 sudo[94407]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:00 compute-0 sudo[94485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwlvglhrhqajbrwdgovteadcobmjexns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539499.2272248-421-254690978277355/AnsiballZ_file.py'
Jan 27 18:45:00 compute-0 sudo[94485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:00 compute-0 python3.9[94487]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:45:00 compute-0 sudo[94485]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:00 compute-0 sudo[94637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agcklablrapvgwlhaikmdujrwgmntvit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539500.534321-433-229564676754523/AnsiballZ_stat.py'
Jan 27 18:45:00 compute-0 sudo[94637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:01 compute-0 python3.9[94639]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:01 compute-0 sudo[94637]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:01 compute-0 sudo[94715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycihqkljfhkuabpexsiwisevlhwiuiox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539500.534321-433-229564676754523/AnsiballZ_file.py'
Jan 27 18:45:01 compute-0 sudo[94715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:01 compute-0 python3.9[94717]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:45:01 compute-0 sudo[94715]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:02 compute-0 sudo[94867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywqcvwialplmbrbtitclsuufklrgndqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539501.694062-445-122382000684724/AnsiballZ_systemd.py'
Jan 27 18:45:02 compute-0 sudo[94867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:02 compute-0 python3.9[94869]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:45:02 compute-0 systemd[1]: Reloading.
Jan 27 18:45:02 compute-0 systemd-rc-local-generator[94899]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:45:02 compute-0 systemd-sysv-generator[94903]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:45:02 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 18:45:02 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 18:45:02 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 18:45:02 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 18:45:02 compute-0 sudo[94867]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:03 compute-0 sudo[95062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipgwsjszaslywtiyrsrrgolthtqemllc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539502.9279125-455-149244679142110/AnsiballZ_file.py'
Jan 27 18:45:03 compute-0 sudo[95062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:03 compute-0 python3.9[95064]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:03 compute-0 sudo[95062]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:04 compute-0 sudo[95214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nroexgnlcenunnpaxvxrqmunyyoehflx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539503.8896906-463-45460238128556/AnsiballZ_stat.py'
Jan 27 18:45:04 compute-0 sudo[95214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:04 compute-0 python3.9[95216]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:04 compute-0 sudo[95214]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:04 compute-0 sudo[95337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnfdbfgvlxeedwobcsgpkjbojaowmyhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539503.8896906-463-45460238128556/AnsiballZ_copy.py'
Jan 27 18:45:04 compute-0 sudo[95337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:04 compute-0 python3.9[95339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539503.8896906-463-45460238128556/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:04 compute-0 sudo[95337]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:05 compute-0 sudo[95489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsczzikqlynaficynmbjgzqetfrseijm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539505.3921633-480-6027005195191/AnsiballZ_file.py'
Jan 27 18:45:05 compute-0 sudo[95489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:05 compute-0 python3.9[95491]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:45:05 compute-0 sudo[95489]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:06 compute-0 sudo[95641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqykgmqzselpkhxchvdaqzmroiyltanz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539506.1234388-488-166334567242163/AnsiballZ_file.py'
Jan 27 18:45:06 compute-0 sudo[95641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:06 compute-0 python3.9[95643]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:06 compute-0 sudo[95641]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:07 compute-0 sudo[95793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roogdrzxlgyxixefqsxckkmnafizjvni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539506.9359934-496-170882882372524/AnsiballZ_stat.py'
Jan 27 18:45:07 compute-0 sudo[95793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:07 compute-0 python3.9[95795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:07 compute-0 sudo[95793]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:07 compute-0 sudo[95916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwpjtjumlfueywdzkkmyuylsdeqfxotw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539506.9359934-496-170882882372524/AnsiballZ_copy.py'
Jan 27 18:45:07 compute-0 sudo[95916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:07 compute-0 python3.9[95918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539506.9359934-496-170882882372524/.source.json _original_basename=.v2ujbl4m follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:45:07 compute-0 sudo[95916]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:08 compute-0 python3.9[96068]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:45:10 compute-0 sudo[96489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzjpsljnuupxyxrysvyfkbugfzrnypcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539510.176124-536-144604940742760/AnsiballZ_container_config_data.py'
Jan 27 18:45:10 compute-0 sudo[96489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:10 compute-0 python3.9[96491]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 27 18:45:10 compute-0 sudo[96489]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:11 compute-0 sudo[96641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqmuinhlxvhthilkjklbvcyxvqidqdqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539511.249064-547-204377122741187/AnsiballZ_container_config_hash.py'
Jan 27 18:45:11 compute-0 sudo[96641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:11 compute-0 python3.9[96643]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 18:45:11 compute-0 sudo[96641]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:12 compute-0 sudo[96793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmkxxuhpwyoadgiusfnjoxlalhyclvyr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769539512.1924665-557-144498842229527/AnsiballZ_edpm_container_manage.py'
Jan 27 18:45:12 compute-0 sudo[96793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:13 compute-0 python3[96795]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 18:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:45:13 compute-0 podman[96832]: 2026-01-27 18:45:13.287179372 +0000 UTC m=+0.070176130 container create 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:45:13 compute-0 podman[96832]: 2026-01-27 18:45:13.253847181 +0000 UTC m=+0.036844029 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 18:45:13 compute-0 python3[96795]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 18:45:13 compute-0 sudo[96793]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:14 compute-0 sudo[97021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buptznxhslmryygmjblnirnytplevsvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539513.7801464-565-237495482820713/AnsiballZ_stat.py'
Jan 27 18:45:14 compute-0 sudo[97021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 18:45:14 compute-0 python3.9[97023]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:45:14 compute-0 sudo[97021]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:14 compute-0 sudo[97175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkjkxhvgdnfklkbwchayyikuujfcdnta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539514.5700102-574-64409841233140/AnsiballZ_file.py'
Jan 27 18:45:14 compute-0 sudo[97175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:15 compute-0 python3.9[97177]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:45:15 compute-0 sudo[97175]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:15 compute-0 sudo[97251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hastlzzmhuxrdvijywgakechuucsgeps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539514.5700102-574-64409841233140/AnsiballZ_stat.py'
Jan 27 18:45:15 compute-0 sudo[97251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:15 compute-0 python3.9[97253]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:45:15 compute-0 sudo[97251]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:16 compute-0 sudo[97402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufybgvdfipidwmarbhtivbnehduyeiap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539515.5294821-574-110297308480136/AnsiballZ_copy.py'
Jan 27 18:45:16 compute-0 sudo[97402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:16 compute-0 python3.9[97404]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769539515.5294821-574-110297308480136/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:45:16 compute-0 sudo[97402]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:16 compute-0 sudo[97478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okclxouejucoizsvewwuzvhwawhqusfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539515.5294821-574-110297308480136/AnsiballZ_systemd.py'
Jan 27 18:45:16 compute-0 sudo[97478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:16 compute-0 python3.9[97480]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:45:16 compute-0 systemd[1]: Reloading.
Jan 27 18:45:16 compute-0 systemd-rc-local-generator[97502]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:45:16 compute-0 systemd-sysv-generator[97508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:45:17 compute-0 sudo[97478]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:17 compute-0 sudo[97588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fonnwlrbgcnkywojkreqwcgfrlswknyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539515.5294821-574-110297308480136/AnsiballZ_systemd.py'
Jan 27 18:45:17 compute-0 sudo[97588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:17 compute-0 python3.9[97590]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:45:17 compute-0 systemd[1]: Reloading.
Jan 27 18:45:17 compute-0 systemd-rc-local-generator[97615]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:45:17 compute-0 systemd-sysv-generator[97620]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:45:17 compute-0 systemd[1]: Starting ovn_controller container...
Jan 27 18:45:18 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 27 18:45:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26bf8ddada666f3b1932051259280a97af03df90a9e0ede32bc1df787763d88/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 27 18:45:18 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b.
Jan 27 18:45:18 compute-0 podman[97631]: 2026-01-27 18:45:18.126219869 +0000 UTC m=+0.118747921 container init 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 18:45:18 compute-0 ovn_controller[97647]: + sudo -E kolla_set_configs
Jan 27 18:45:18 compute-0 podman[97631]: 2026-01-27 18:45:18.151754494 +0000 UTC m=+0.144282526 container start 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:45:18 compute-0 edpm-start-podman-container[97631]: ovn_controller
Jan 27 18:45:18 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 27 18:45:18 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 27 18:45:18 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 27 18:45:18 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 27 18:45:18 compute-0 edpm-start-podman-container[97630]: Creating additional drop-in dependency for "ovn_controller" (94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b)
Jan 27 18:45:18 compute-0 systemd[97684]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 27 18:45:18 compute-0 podman[97654]: 2026-01-27 18:45:18.232546721 +0000 UTC m=+0.067473778 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 27 18:45:18 compute-0 systemd[1]: 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b-60b457b2860350d.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 18:45:18 compute-0 systemd[1]: 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b-60b457b2860350d.service: Failed with result 'exit-code'.
Jan 27 18:45:18 compute-0 systemd[1]: Reloading.
Jan 27 18:45:18 compute-0 systemd-rc-local-generator[97730]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:45:18 compute-0 systemd-sysv-generator[97734]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:45:18 compute-0 systemd[97684]: Queued start job for default target Main User Target.
Jan 27 18:45:18 compute-0 systemd[97684]: Created slice User Application Slice.
Jan 27 18:45:18 compute-0 systemd[97684]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 27 18:45:18 compute-0 systemd[97684]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 18:45:18 compute-0 systemd[97684]: Reached target Paths.
Jan 27 18:45:18 compute-0 systemd[97684]: Reached target Timers.
Jan 27 18:45:18 compute-0 systemd[97684]: Starting D-Bus User Message Bus Socket...
Jan 27 18:45:18 compute-0 systemd[97684]: Starting Create User's Volatile Files and Directories...
Jan 27 18:45:18 compute-0 systemd[97684]: Finished Create User's Volatile Files and Directories.
Jan 27 18:45:18 compute-0 systemd[97684]: Listening on D-Bus User Message Bus Socket.
Jan 27 18:45:18 compute-0 systemd[97684]: Reached target Sockets.
Jan 27 18:45:18 compute-0 systemd[97684]: Reached target Basic System.
Jan 27 18:45:18 compute-0 systemd[97684]: Reached target Main User Target.
Jan 27 18:45:18 compute-0 systemd[97684]: Startup finished in 121ms.
Jan 27 18:45:18 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 27 18:45:18 compute-0 systemd[1]: Started ovn_controller container.
Jan 27 18:45:18 compute-0 systemd[1]: Started Session c1 of User root.
Jan 27 18:45:18 compute-0 sudo[97588]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:18 compute-0 ovn_controller[97647]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 18:45:18 compute-0 ovn_controller[97647]: INFO:__main__:Validating config file
Jan 27 18:45:18 compute-0 ovn_controller[97647]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 18:45:18 compute-0 ovn_controller[97647]: INFO:__main__:Writing out command to execute
Jan 27 18:45:18 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 27 18:45:18 compute-0 ovn_controller[97647]: ++ cat /run_command
Jan 27 18:45:18 compute-0 ovn_controller[97647]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 27 18:45:18 compute-0 ovn_controller[97647]: + ARGS=
Jan 27 18:45:18 compute-0 ovn_controller[97647]: + sudo kolla_copy_cacerts
Jan 27 18:45:18 compute-0 systemd[1]: Started Session c2 of User root.
Jan 27 18:45:18 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 27 18:45:18 compute-0 ovn_controller[97647]: + [[ ! -n '' ]]
Jan 27 18:45:18 compute-0 ovn_controller[97647]: + . kolla_extend_start
Jan 27 18:45:18 compute-0 ovn_controller[97647]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 27 18:45:18 compute-0 ovn_controller[97647]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 27 18:45:18 compute-0 ovn_controller[97647]: + umask 0022
Jan 27 18:45:18 compute-0 ovn_controller[97647]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 27 18:45:18 compute-0 NetworkManager[56191]: <info>  [1769539518.6160] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 27 18:45:18 compute-0 NetworkManager[56191]: <info>  [1769539518.6166] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 18:45:18 compute-0 NetworkManager[56191]: <warn>  [1769539518.6168] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 18:45:18 compute-0 NetworkManager[56191]: <info>  [1769539518.6173] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Jan 27 18:45:18 compute-0 NetworkManager[56191]: <info>  [1769539518.6176] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Jan 27 18:45:18 compute-0 NetworkManager[56191]: <info>  [1769539518.6178] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 27 18:45:18 compute-0 kernel: br-int: entered promiscuous mode
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 18:45:18 compute-0 ovn_controller[97647]: 2026-01-27T18:45:18Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 18:45:18 compute-0 NetworkManager[56191]: <info>  [1769539518.6392] manager: (ovn-e7fcb6-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 27 18:45:18 compute-0 systemd-udevd[97778]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 18:45:18 compute-0 systemd-udevd[97780]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 18:45:18 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 27 18:45:18 compute-0 NetworkManager[56191]: <info>  [1769539518.6682] device (genev_sys_6081): carrier: link connected
Jan 27 18:45:18 compute-0 NetworkManager[56191]: <info>  [1769539518.6686] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Jan 27 18:45:19 compute-0 python3.9[97908]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 18:45:20 compute-0 sudo[98058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpzylefcnontbarjxrcwnmupzvakcbfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539520.3171473-619-245558824391599/AnsiballZ_stat.py'
Jan 27 18:45:20 compute-0 sudo[98058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:20 compute-0 python3.9[98060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:20 compute-0 sudo[98058]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:21 compute-0 sudo[98181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdkiyjfwywldpoomukzgwoonzhmedqvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539520.3171473-619-245558824391599/AnsiballZ_copy.py'
Jan 27 18:45:21 compute-0 sudo[98181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:21 compute-0 python3.9[98183]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539520.3171473-619-245558824391599/.source.yaml _original_basename=.uidp7fa9 follow=False checksum=d66aa1ab3e6c8e00ec702f408492fa7f4e2b0fa3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:45:21 compute-0 sudo[98181]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:21 compute-0 sudo[98333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvxgzsvzxjbhwslyoxlcblzfovetiejm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539521.5641341-634-189607559450892/AnsiballZ_command.py'
Jan 27 18:45:21 compute-0 sudo[98333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:22 compute-0 python3.9[98335]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:45:22 compute-0 ovs-vsctl[98336]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 27 18:45:22 compute-0 sudo[98333]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:22 compute-0 sudo[98486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjscognsictsbamqzvrxhngyaallplqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539522.314772-642-145803969546709/AnsiballZ_command.py'
Jan 27 18:45:22 compute-0 sudo[98486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:23 compute-0 python3.9[98488]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:45:23 compute-0 ovs-vsctl[98490]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 27 18:45:23 compute-0 sudo[98486]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:23 compute-0 sudo[98641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdhnqplibtvyhlgwjglxjhhpypzhvgwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539523.3943355-656-113830734778068/AnsiballZ_command.py'
Jan 27 18:45:23 compute-0 sudo[98641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:23 compute-0 python3.9[98643]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:45:23 compute-0 ovs-vsctl[98644]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 27 18:45:23 compute-0 sudo[98641]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:24 compute-0 sshd-session[87166]: Connection closed by 192.168.122.31 port 34624
Jan 27 18:45:24 compute-0 sshd-session[87163]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:45:24 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 27 18:45:24 compute-0 systemd[1]: session-19.scope: Consumed 49.950s CPU time.
Jan 27 18:45:24 compute-0 systemd-logind[795]: Session 19 logged out. Waiting for processes to exit.
Jan 27 18:45:24 compute-0 systemd-logind[795]: Removed session 19.
Jan 27 18:45:28 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 27 18:45:28 compute-0 systemd[97684]: Activating special unit Exit the Session...
Jan 27 18:45:28 compute-0 systemd[97684]: Stopped target Main User Target.
Jan 27 18:45:28 compute-0 systemd[97684]: Stopped target Basic System.
Jan 27 18:45:28 compute-0 systemd[97684]: Stopped target Paths.
Jan 27 18:45:28 compute-0 systemd[97684]: Stopped target Sockets.
Jan 27 18:45:28 compute-0 systemd[97684]: Stopped target Timers.
Jan 27 18:45:28 compute-0 systemd[97684]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 27 18:45:28 compute-0 systemd[97684]: Closed D-Bus User Message Bus Socket.
Jan 27 18:45:28 compute-0 systemd[97684]: Stopped Create User's Volatile Files and Directories.
Jan 27 18:45:28 compute-0 systemd[97684]: Removed slice User Application Slice.
Jan 27 18:45:28 compute-0 systemd[97684]: Reached target Shutdown.
Jan 27 18:45:28 compute-0 systemd[97684]: Finished Exit the Session.
Jan 27 18:45:28 compute-0 systemd[97684]: Reached target Exit the Session.
Jan 27 18:45:28 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 27 18:45:28 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 27 18:45:28 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 27 18:45:28 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 27 18:45:28 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 27 18:45:28 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 27 18:45:28 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 27 18:45:30 compute-0 sshd-session[98672]: Accepted publickey for zuul from 192.168.122.31 port 56124 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:45:30 compute-0 systemd-logind[795]: New session 21 of user zuul.
Jan 27 18:45:30 compute-0 systemd[1]: Started Session 21 of User zuul.
Jan 27 18:45:30 compute-0 sshd-session[98672]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:45:31 compute-0 python3.9[98825]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:45:32 compute-0 sudo[98979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipgnkjeytbbezbsnrhnsunfwkrskznpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539532.353053-29-224750902172552/AnsiballZ_file.py'
Jan 27 18:45:32 compute-0 sudo[98979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:32 compute-0 python3.9[98981]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:33 compute-0 sudo[98979]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:33 compute-0 sudo[99131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjdzautbhhyjbaxjanhrgwfpeptuzypi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539533.1580315-29-227698462568001/AnsiballZ_file.py'
Jan 27 18:45:33 compute-0 sudo[99131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:33 compute-0 python3.9[99133]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:33 compute-0 sudo[99131]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:34 compute-0 sudo[99283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfabherghkoofjjhewiyhtyfvmbdmuoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539533.8639624-29-145411022178611/AnsiballZ_file.py'
Jan 27 18:45:34 compute-0 sudo[99283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:34 compute-0 python3.9[99285]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:34 compute-0 sudo[99283]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:34 compute-0 sudo[99435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-botpprgwyqtjjnvdqgethoyxziyguqmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539534.6180816-29-147132944550522/AnsiballZ_file.py'
Jan 27 18:45:34 compute-0 sudo[99435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:35 compute-0 python3.9[99437]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:35 compute-0 sudo[99435]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:35 compute-0 sudo[99587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlckodmmuyaswmxbddbqzwrseppldmmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539535.2289565-29-170275082952390/AnsiballZ_file.py'
Jan 27 18:45:35 compute-0 sudo[99587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:35 compute-0 python3.9[99589]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:35 compute-0 sudo[99587]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:36 compute-0 python3.9[99739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:45:37 compute-0 sudo[99890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huhlowcsvfymtdrhidqvlfqauxupdvue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539536.8792908-73-251990346550100/AnsiballZ_seboolean.py'
Jan 27 18:45:37 compute-0 sudo[99890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:37 compute-0 python3.9[99892]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 27 18:45:38 compute-0 sudo[99890]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:39 compute-0 python3.9[100042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:39 compute-0 python3.9[100163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539538.420578-81-228349858241623/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:40 compute-0 python3.9[100313]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:40 compute-0 python3.9[100434]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539539.8709416-96-125719153528114/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:41 compute-0 sudo[100584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaoexdjnwpxoktkfpqjbekiarjeltfbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539541.218293-113-34688133426145/AnsiballZ_setup.py'
Jan 27 18:45:41 compute-0 sudo[100584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:41 compute-0 python3.9[100586]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:45:42 compute-0 sudo[100584]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:42 compute-0 sudo[100668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cckzaaksheoqczxsbuvdlookoqsafxnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539541.218293-113-34688133426145/AnsiballZ_dnf.py'
Jan 27 18:45:42 compute-0 sudo[100668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:42 compute-0 python3.9[100670]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:45:44 compute-0 sudo[100668]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:44 compute-0 sudo[100821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icvbvtssdlhrrymcbdrpnaojfqpswlcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539544.3474762-125-83874695999220/AnsiballZ_systemd.py'
Jan 27 18:45:44 compute-0 sudo[100821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:45 compute-0 python3.9[100823]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 18:45:46 compute-0 sudo[100821]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:46 compute-0 python3.9[100976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:47 compute-0 python3.9[101097]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539546.443759-133-235579008355288/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:48 compute-0 python3.9[101247]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:48 compute-0 ovn_controller[97647]: 2026-01-27T18:45:48Z|00025|memory|INFO|16256 kB peak resident set size after 30.1 seconds
Jan 27 18:45:48 compute-0 ovn_controller[97647]: 2026-01-27T18:45:48Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 27 18:45:48 compute-0 podman[101342]: 2026-01-27 18:45:48.76241434 +0000 UTC m=+0.105221258 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:45:48 compute-0 python3.9[101377]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539547.680125-133-27912548762003/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:50 compute-0 python3.9[101545]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:50 compute-0 python3.9[101666]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539549.7413166-177-109810288679530/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:51 compute-0 python3.9[101816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:51 compute-0 python3.9[101937]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539550.9000158-177-7831787404798/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:52 compute-0 python3.9[102087]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:45:53 compute-0 sudo[102239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lezgiqgkvimypjrhkzkprnuyqmspbcnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539552.7817512-215-240988801202661/AnsiballZ_file.py'
Jan 27 18:45:53 compute-0 sudo[102239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:53 compute-0 python3.9[102241]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:53 compute-0 sudo[102239]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:53 compute-0 sudo[102391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adiuuasvykzvospctgmvlrdtkyanasuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539553.5869517-223-221128930985343/AnsiballZ_stat.py'
Jan 27 18:45:53 compute-0 sudo[102391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:54 compute-0 python3.9[102393]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:54 compute-0 sudo[102391]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:54 compute-0 sudo[102469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iftlhffqlourlwckhglnzkeuemcipmjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539553.5869517-223-221128930985343/AnsiballZ_file.py'
Jan 27 18:45:54 compute-0 sudo[102469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:54 compute-0 python3.9[102471]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:54 compute-0 sudo[102469]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:54 compute-0 sudo[102621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scesxjpqyukcxxabjyhgmdfraakxclhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539554.6888738-223-123724485602734/AnsiballZ_stat.py'
Jan 27 18:45:54 compute-0 sudo[102621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:55 compute-0 python3.9[102623]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:55 compute-0 sudo[102621]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:55 compute-0 sudo[102699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdriyqxroytvepvkyzibfjbjnqpwmdml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539554.6888738-223-123724485602734/AnsiballZ_file.py'
Jan 27 18:45:55 compute-0 sudo[102699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:55 compute-0 python3.9[102701]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:45:55 compute-0 sudo[102699]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:56 compute-0 sudo[102851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agleehfnroahdzpybipelepsfymfldjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539555.753604-246-116317527310992/AnsiballZ_file.py'
Jan 27 18:45:56 compute-0 sudo[102851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:56 compute-0 python3.9[102853]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:45:56 compute-0 sudo[102851]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:56 compute-0 sudo[103003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyihmwmcirvihutxzouasrbygzhqpstt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539556.4218736-254-217150458299473/AnsiballZ_stat.py'
Jan 27 18:45:56 compute-0 sudo[103003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:56 compute-0 python3.9[103005]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:56 compute-0 sudo[103003]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:57 compute-0 sudo[103081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thwtvlcpojoaaxrzdyjkklxeismxirhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539556.4218736-254-217150458299473/AnsiballZ_file.py'
Jan 27 18:45:57 compute-0 sudo[103081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:57 compute-0 python3.9[103083]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:45:57 compute-0 sudo[103081]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:57 compute-0 sudo[103233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wftydeskstwkmykxilpqwqjevpwkklny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539557.4796815-266-68231531490860/AnsiballZ_stat.py'
Jan 27 18:45:57 compute-0 sudo[103233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:57 compute-0 python3.9[103235]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:45:57 compute-0 sudo[103233]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:58 compute-0 sudo[103311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojcbbbjltrdmejcvovmoocdxxgmglnip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539557.4796815-266-68231531490860/AnsiballZ_file.py'
Jan 27 18:45:58 compute-0 sudo[103311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:58 compute-0 python3.9[103313]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:45:58 compute-0 sudo[103311]: pam_unix(sudo:session): session closed for user root
Jan 27 18:45:58 compute-0 sudo[103463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuhmjtnkxnshmjsjkwbxhbnpbhkftbrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539558.7175941-278-200324498381336/AnsiballZ_systemd.py'
Jan 27 18:45:58 compute-0 sudo[103463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:45:59 compute-0 python3.9[103465]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:45:59 compute-0 systemd[1]: Reloading.
Jan 27 18:45:59 compute-0 systemd-sysv-generator[103498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:45:59 compute-0 systemd-rc-local-generator[103493]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:45:59 compute-0 sudo[103463]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:00 compute-0 sudo[103653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crnccmrfefkbcapqqxbxwwwbipdjxwum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539559.8121457-286-101720372124532/AnsiballZ_stat.py'
Jan 27 18:46:00 compute-0 sudo[103653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:00 compute-0 python3.9[103655]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:46:00 compute-0 sudo[103653]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:00 compute-0 sudo[103731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtkeukenjszeozwxsodioximcogwanhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539559.8121457-286-101720372124532/AnsiballZ_file.py'
Jan 27 18:46:00 compute-0 sudo[103731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:00 compute-0 python3.9[103733]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:00 compute-0 sudo[103731]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:01 compute-0 sudo[103883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsacrvhuwaxvzcoyjwxwbyjjohoxgpmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539561.046471-298-126581458619431/AnsiballZ_stat.py'
Jan 27 18:46:01 compute-0 sudo[103883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:01 compute-0 python3.9[103885]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:46:01 compute-0 sudo[103883]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:01 compute-0 sudo[103961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slkonfkvbwqjcnwingrokttlcrkqrmta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539561.046471-298-126581458619431/AnsiballZ_file.py'
Jan 27 18:46:01 compute-0 sudo[103961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:02 compute-0 python3.9[103963]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:02 compute-0 sudo[103961]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:02 compute-0 sudo[104113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtbfivhrwcbraaxqfecacvrkjnnzkasf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539562.226143-310-230448743643029/AnsiballZ_systemd.py'
Jan 27 18:46:02 compute-0 sudo[104113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:02 compute-0 python3.9[104115]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:46:02 compute-0 systemd[1]: Reloading.
Jan 27 18:46:02 compute-0 systemd-sysv-generator[104146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:46:02 compute-0 systemd-rc-local-generator[104140]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:46:03 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 18:46:03 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 18:46:03 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 18:46:03 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 18:46:03 compute-0 sudo[104113]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:03 compute-0 sudo[104307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twvexndnytunrrjkughkubvangermncv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539563.555865-320-213083959796171/AnsiballZ_file.py'
Jan 27 18:46:03 compute-0 sudo[104307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:04 compute-0 python3.9[104309]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:46:04 compute-0 sudo[104307]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:04 compute-0 sudo[104459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jddneirirqhxbqvaqfaldnithupejjzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539564.2148728-328-278271917166082/AnsiballZ_stat.py'
Jan 27 18:46:04 compute-0 sudo[104459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:04 compute-0 python3.9[104461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:46:04 compute-0 sudo[104459]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:04 compute-0 sudo[104582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhzhuqonrfsridrtdoxiwynybqyafhim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539564.2148728-328-278271917166082/AnsiballZ_copy.py'
Jan 27 18:46:04 compute-0 sudo[104582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:05 compute-0 python3.9[104584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769539564.2148728-328-278271917166082/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:46:05 compute-0 sudo[104582]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:05 compute-0 sudo[104734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmvaxmcztukcxizmyieepxmvdeodgrtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539565.5117707-345-88338205839979/AnsiballZ_file.py'
Jan 27 18:46:05 compute-0 sudo[104734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:06 compute-0 python3.9[104736]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:06 compute-0 sudo[104734]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:06 compute-0 sudo[104886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruesifnqbnkxqmpczxgmcleycedthcff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539566.2120347-353-236987544431780/AnsiballZ_file.py'
Jan 27 18:46:06 compute-0 sudo[104886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:06 compute-0 python3.9[104888]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:46:06 compute-0 sudo[104886]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:07 compute-0 sudo[105038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zthjogpwxomqtpatcaklsocmxlwehhva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539566.8838408-361-178404042216722/AnsiballZ_stat.py'
Jan 27 18:46:07 compute-0 sudo[105038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:07 compute-0 python3.9[105040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:46:07 compute-0 sudo[105038]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:07 compute-0 sudo[105161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isybfixbfrrhgwggnbtuzoyskqzlvhmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539566.8838408-361-178404042216722/AnsiballZ_copy.py'
Jan 27 18:46:07 compute-0 sudo[105161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:07 compute-0 python3.9[105163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539566.8838408-361-178404042216722/.source.json _original_basename=.kmf1714w follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:07 compute-0 sudo[105161]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:08 compute-0 python3.9[105313]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:10 compute-0 sudo[105734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuuiddpnpklimnknzhnkgppphuuyydny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539570.3835728-401-279604922180000/AnsiballZ_container_config_data.py'
Jan 27 18:46:10 compute-0 sudo[105734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:11 compute-0 python3.9[105736]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 27 18:46:11 compute-0 sudo[105734]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:11 compute-0 sudo[105886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyogiawacwcwwgxgcxvzuclwabbfdlog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539571.4064407-412-115452526730378/AnsiballZ_container_config_hash.py'
Jan 27 18:46:11 compute-0 sudo[105886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:12 compute-0 python3.9[105888]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 18:46:12 compute-0 sudo[105886]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:13 compute-0 sudo[106038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjyatlxuguygwpjiqgxvspurtczuzjua ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769539572.5222752-422-82821651246661/AnsiballZ_edpm_container_manage.py'
Jan 27 18:46:13 compute-0 sudo[106038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:13 compute-0 python3[106040]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 18:46:13 compute-0 podman[106077]: 2026-01-27 18:46:13.55986444 +0000 UTC m=+0.069141383 container create fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 18:46:13 compute-0 podman[106077]: 2026-01-27 18:46:13.529246809 +0000 UTC m=+0.038523782 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 18:46:13 compute-0 python3[106040]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 18:46:13 compute-0 sudo[106038]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:14 compute-0 sudo[106266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrrxjkwdehgmyqpvlwzfetaxfinpmafm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539573.8979318-430-108003740496970/AnsiballZ_stat.py'
Jan 27 18:46:14 compute-0 sudo[106266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:14 compute-0 python3.9[106268]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:46:14 compute-0 sudo[106266]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:14 compute-0 sudo[106420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvnnbmbowerhwwggfgfedetmqkkhcewc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539574.6776395-439-137209397444808/AnsiballZ_file.py'
Jan 27 18:46:14 compute-0 sudo[106420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:15 compute-0 python3.9[106422]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:15 compute-0 sudo[106420]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:15 compute-0 sudo[106496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfykjtvorqammjbozifnydoyfejnbhii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539574.6776395-439-137209397444808/AnsiballZ_stat.py'
Jan 27 18:46:15 compute-0 sudo[106496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:15 compute-0 python3.9[106498]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:46:15 compute-0 sudo[106496]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:16 compute-0 sudo[106647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwkzsenotbojaufdpvrjdljocdyqtfhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539575.7604883-439-212744280281605/AnsiballZ_copy.py'
Jan 27 18:46:16 compute-0 sudo[106647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:16 compute-0 python3.9[106649]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769539575.7604883-439-212744280281605/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:16 compute-0 sudo[106647]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:16 compute-0 sudo[106723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgwajcdbfqlvqsldjxdezebuxllzzxnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539575.7604883-439-212744280281605/AnsiballZ_systemd.py'
Jan 27 18:46:16 compute-0 sudo[106723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:17 compute-0 python3.9[106725]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:46:17 compute-0 systemd[1]: Reloading.
Jan 27 18:46:17 compute-0 systemd-sysv-generator[106753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:46:17 compute-0 systemd-rc-local-generator[106749]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:46:17 compute-0 sudo[106723]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:17 compute-0 sudo[106834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecrbbucqgdpliytqzfwwdijoxysijiem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539575.7604883-439-212744280281605/AnsiballZ_systemd.py'
Jan 27 18:46:17 compute-0 sudo[106834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:17 compute-0 python3.9[106836]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:46:17 compute-0 systemd[1]: Reloading.
Jan 27 18:46:18 compute-0 systemd-rc-local-generator[106865]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:46:18 compute-0 systemd-sysv-generator[106868]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:46:18 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 27 18:46:18 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b807a2ade4f56cd4f08e3abfbb062c6d37be73b7790f1a28106824fceaba3960/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 27 18:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b807a2ade4f56cd4f08e3abfbb062c6d37be73b7790f1a28106824fceaba3960/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 18:46:18 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943.
Jan 27 18:46:18 compute-0 podman[106877]: 2026-01-27 18:46:18.334032621 +0000 UTC m=+0.149636271 container init fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: + sudo -E kolla_set_configs
Jan 27 18:46:18 compute-0 podman[106877]: 2026-01-27 18:46:18.367067605 +0000 UTC m=+0.182671255 container start fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 27 18:46:18 compute-0 edpm-start-podman-container[106877]: ovn_metadata_agent
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Validating config file
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Copying service configuration files
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Writing out command to execute
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: ++ cat /run_command
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: + CMD=neutron-ovn-metadata-agent
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: + ARGS=
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: + sudo kolla_copy_cacerts
Jan 27 18:46:18 compute-0 edpm-start-podman-container[106876]: Creating additional drop-in dependency for "ovn_metadata_agent" (fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943)
Jan 27 18:46:18 compute-0 podman[106900]: 2026-01-27 18:46:18.45068107 +0000 UTC m=+0.068026489 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 18:46:18 compute-0 systemd[1]: Reloading.
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: + [[ ! -n '' ]]
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: + . kolla_extend_start
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: Running command: 'neutron-ovn-metadata-agent'
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: + umask 0022
Jan 27 18:46:18 compute-0 ovn_metadata_agent[106893]: + exec neutron-ovn-metadata-agent
Jan 27 18:46:18 compute-0 systemd-rc-local-generator[106966]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:46:18 compute-0 systemd-sysv-generator[106969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:46:18 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 27 18:46:18 compute-0 sudo[106834]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:19 compute-0 podman[107058]: 2026-01-27 18:46:19.391714757 +0000 UTC m=+0.156616373 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true)
Jan 27 18:46:19 compute-0 python3.9[107157]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.442 106898 INFO neutron.common.config [-] Logging enabled!
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.442 106898 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.442 106898 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.443 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.443 106898 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.443 106898 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.443 106898 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.443 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.443 106898 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.443 106898 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.443 106898 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.444 106898 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.444 106898 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.444 106898 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.444 106898 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.444 106898 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.444 106898 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.444 106898 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.444 106898 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.444 106898 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.445 106898 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.445 106898 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.445 106898 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.445 106898 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.445 106898 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.445 106898 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.445 106898 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.445 106898 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.445 106898 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.446 106898 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.446 106898 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.446 106898 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.446 106898 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.446 106898 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.446 106898 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.446 106898 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.446 106898 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.447 106898 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.447 106898 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.447 106898 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.447 106898 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.447 106898 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.447 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.447 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.447 106898 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.447 106898 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.447 106898 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.448 106898 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.448 106898 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.448 106898 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.448 106898 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.448 106898 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.448 106898 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.448 106898 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.448 106898 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.448 106898 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.449 106898 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.449 106898 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.449 106898 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.449 106898 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.449 106898 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.449 106898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.449 106898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.449 106898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.449 106898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.450 106898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.450 106898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.450 106898 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.450 106898 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.450 106898 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.450 106898 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.450 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.450 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.450 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.450 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.451 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.451 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.451 106898 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.451 106898 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.451 106898 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.451 106898 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.451 106898 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.451 106898 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.451 106898 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.451 106898 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.452 106898 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.452 106898 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.452 106898 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.452 106898 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.452 106898 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.452 106898 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.452 106898 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.452 106898 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.452 106898 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.452 106898 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.453 106898 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.453 106898 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.453 106898 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.453 106898 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.453 106898 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.453 106898 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.453 106898 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.453 106898 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.453 106898 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.453 106898 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.453 106898 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.454 106898 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.454 106898 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.454 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.454 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.454 106898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.454 106898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.454 106898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.454 106898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.454 106898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.455 106898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.455 106898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.455 106898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.455 106898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.455 106898 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.455 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.455 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.455 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.455 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.455 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.456 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.456 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.456 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.456 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.456 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.456 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.456 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.456 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.456 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.457 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.457 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.457 106898 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.457 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.457 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.457 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.457 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.457 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.457 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.458 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.458 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.458 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.458 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.458 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.458 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.458 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.458 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.458 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.458 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.459 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.459 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.459 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.459 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.459 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.459 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.459 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.459 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.459 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.459 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.460 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.460 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.460 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.460 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.460 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.460 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.460 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.460 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.460 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.461 106898 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.461 106898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.461 106898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.461 106898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.461 106898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.461 106898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.461 106898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.461 106898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.461 106898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.461 106898 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.462 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.462 106898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.462 106898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.462 106898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.462 106898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.462 106898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.462 106898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.462 106898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.462 106898 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.463 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.463 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.463 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.463 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.463 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.463 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.463 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.463 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.463 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.463 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.464 106898 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.464 106898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.464 106898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.464 106898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.464 106898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.464 106898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.464 106898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.464 106898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.464 106898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.464 106898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.465 106898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.465 106898 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.465 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.465 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.465 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.465 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.465 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.465 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.465 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.466 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.466 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.466 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.466 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.466 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.466 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.466 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.466 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.466 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.466 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.467 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.467 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.467 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.467 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.467 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.467 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.467 106898 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.467 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.467 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.467 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.468 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.468 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.468 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.468 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.468 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.468 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.468 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.468 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.468 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.469 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.469 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.469 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.469 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.469 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.469 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.469 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.469 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.469 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.470 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.470 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.470 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.470 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.470 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.470 106898 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.470 106898 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.470 106898 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.470 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.470 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.471 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.471 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.471 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.471 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.471 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.471 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.471 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.471 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.471 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.471 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.472 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.472 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.472 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.472 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.472 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.472 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.472 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.472 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.472 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.473 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.473 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.473 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.473 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.473 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.473 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.473 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.473 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.473 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.474 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.474 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.474 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.474 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.474 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.474 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.474 106898 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.474 106898 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.483 106898 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.483 106898 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.483 106898 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.483 106898 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.483 106898 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.494 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name d3b19c13-a2f4-422f-8fa1-01ce64dc0c58 (UUID: d3b19c13-a2f4-422f-8fa1-01ce64dc0c58) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.522 106898 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.522 106898 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.522 106898 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.522 106898 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.525 106898 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.530 106898 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.536 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'd3b19c13-a2f4-422f-8fa1-01ce64dc0c58'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], external_ids={}, name=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, nb_cfg_timestamp=1769539526644, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.537 106898 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f7157a14130>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.537 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.537 106898 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.538 106898 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.538 106898 INFO oslo_service.service [-] Starting 1 workers
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.542 106898 DEBUG oslo_service.service [-] Started child 107247 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.545 106898 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp7gb2qejr/privsep.sock']
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.546 107247 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-432699'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.571 107247 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.571 107247 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.571 107247 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.575 107247 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.581 107247 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 18:46:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:20.588 107247 INFO eventlet.wsgi.server [-] (107247) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 27 18:46:20 compute-0 sudo[107311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxbklpxyihxfvojfiqyjhmdzlwnntvfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539580.4157116-484-245114023684466/AnsiballZ_stat.py'
Jan 27 18:46:20 compute-0 sudo[107311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:20 compute-0 python3.9[107313]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:46:20 compute-0 sudo[107311]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:21 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 27 18:46:21 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:21.279 106898 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 18:46:21 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:21.280 106898 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp7gb2qejr/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 18:46:21 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:21.104 107353 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 18:46:21 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:21.112 107353 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 18:46:21 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:21.117 107353 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 27 18:46:21 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:21.117 107353 INFO oslo.privsep.daemon [-] privsep daemon running as pid 107353
Jan 27 18:46:21 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:21.283 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[5576bd48-843a-461e-b4c4-a7ae3e3873fc]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 18:46:21 compute-0 sudo[107441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgmkernbavqmujbzbmapshhvzzuyygnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539580.4157116-484-245114023684466/AnsiballZ_copy.py'
Jan 27 18:46:21 compute-0 sudo[107441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:21 compute-0 python3.9[107443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539580.4157116-484-245114023684466/.source.yaml _original_basename=.s_9bofsq follow=False checksum=e09db81264829469a1096c600d44f45925e1c107 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:21 compute-0 sudo[107441]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:21 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:21.789 107353 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:46:21 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:21.789 107353 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:46:21 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:21.789 107353 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:46:21 compute-0 sshd-session[98675]: Connection closed by 192.168.122.31 port 56124
Jan 27 18:46:21 compute-0 sshd-session[98672]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:46:21 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Jan 27 18:46:21 compute-0 systemd[1]: session-21.scope: Consumed 37.502s CPU time.
Jan 27 18:46:21 compute-0 systemd-logind[795]: Session 21 logged out. Waiting for processes to exit.
Jan 27 18:46:21 compute-0 systemd-logind[795]: Removed session 21.
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.406 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[cb24b668-38ea-42a6-83a7-0f4cebc43d1e]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.410 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, column=external_ids, values=({'neutron:ovn-metadata-id': '7a75283a-978a-53ee-831f-ae6675dea4bf'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.419 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.426 106898 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.426 106898 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.426 106898 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.426 106898 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.427 106898 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.427 106898 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.427 106898 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.428 106898 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.428 106898 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.428 106898 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.429 106898 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.429 106898 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.429 106898 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.429 106898 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.430 106898 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.430 106898 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.431 106898 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.431 106898 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.431 106898 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.431 106898 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.432 106898 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.432 106898 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.432 106898 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.433 106898 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.433 106898 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.433 106898 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.434 106898 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.434 106898 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.434 106898 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.434 106898 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.435 106898 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.435 106898 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.435 106898 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.436 106898 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.436 106898 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.436 106898 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.436 106898 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.437 106898 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.437 106898 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.438 106898 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.438 106898 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.438 106898 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.438 106898 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.439 106898 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.439 106898 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.439 106898 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.439 106898 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.440 106898 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.440 106898 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.440 106898 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.440 106898 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.441 106898 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.441 106898 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.441 106898 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.441 106898 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.442 106898 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.442 106898 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.442 106898 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.443 106898 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.443 106898 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.443 106898 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.443 106898 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.444 106898 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.444 106898 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.444 106898 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.445 106898 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.445 106898 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.445 106898 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.446 106898 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.446 106898 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.446 106898 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.447 106898 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.447 106898 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.448 106898 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.448 106898 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.448 106898 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.449 106898 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.449 106898 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.449 106898 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.450 106898 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.450 106898 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.450 106898 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.451 106898 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.451 106898 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.451 106898 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.452 106898 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.452 106898 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.452 106898 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.453 106898 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.453 106898 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.453 106898 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.454 106898 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.454 106898 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.454 106898 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.455 106898 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.455 106898 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.455 106898 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.456 106898 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.456 106898 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.456 106898 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.456 106898 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.457 106898 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.457 106898 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.458 106898 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.458 106898 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.458 106898 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.458 106898 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.459 106898 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.459 106898 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.460 106898 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.460 106898 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.460 106898 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.461 106898 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.461 106898 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.461 106898 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.462 106898 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.462 106898 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.462 106898 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.463 106898 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.463 106898 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.464 106898 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.464 106898 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.464 106898 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.465 106898 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.465 106898 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.465 106898 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.466 106898 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.466 106898 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.466 106898 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.467 106898 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.467 106898 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.467 106898 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.468 106898 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.468 106898 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.468 106898 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.469 106898 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.469 106898 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.469 106898 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.469 106898 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.470 106898 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.470 106898 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.470 106898 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.470 106898 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.470 106898 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.470 106898 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.471 106898 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.471 106898 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.471 106898 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.471 106898 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.471 106898 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.471 106898 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.471 106898 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.472 106898 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.472 106898 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.472 106898 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.472 106898 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.472 106898 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.472 106898 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.473 106898 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.473 106898 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.473 106898 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.473 106898 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.473 106898 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.473 106898 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.474 106898 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.474 106898 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.474 106898 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.474 106898 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.474 106898 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.474 106898 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.474 106898 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.475 106898 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.475 106898 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.475 106898 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.475 106898 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.475 106898 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.475 106898 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.475 106898 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.476 106898 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.476 106898 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.476 106898 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.476 106898 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.476 106898 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.476 106898 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.477 106898 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.477 106898 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.477 106898 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.477 106898 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.477 106898 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.477 106898 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.478 106898 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.478 106898 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.478 106898 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.478 106898 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.478 106898 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.478 106898 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.478 106898 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.479 106898 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.479 106898 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.479 106898 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.479 106898 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.479 106898 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.479 106898 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.480 106898 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.480 106898 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.480 106898 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.480 106898 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.480 106898 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.480 106898 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.480 106898 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.480 106898 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.481 106898 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.481 106898 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.481 106898 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.481 106898 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.481 106898 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.481 106898 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.482 106898 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.482 106898 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.482 106898 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.482 106898 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.482 106898 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.482 106898 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.482 106898 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.483 106898 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.483 106898 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.483 106898 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.483 106898 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.483 106898 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.483 106898 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.483 106898 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.483 106898 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.484 106898 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.484 106898 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.484 106898 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.484 106898 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.484 106898 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.484 106898 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.485 106898 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.485 106898 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.485 106898 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.485 106898 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.485 106898 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.485 106898 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.486 106898 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.486 106898 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.486 106898 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.486 106898 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.486 106898 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.486 106898 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.486 106898 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.487 106898 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.487 106898 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.487 106898 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.487 106898 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.487 106898 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.487 106898 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.487 106898 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.488 106898 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.488 106898 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.488 106898 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.488 106898 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.488 106898 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.488 106898 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.489 106898 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.489 106898 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.489 106898 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.489 106898 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.489 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.489 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.490 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.490 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.490 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.490 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.490 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.490 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.490 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.491 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.491 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.491 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.491 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.491 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.491 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.492 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.492 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.492 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.492 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.492 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.492 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.492 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.493 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.493 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.493 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.493 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.493 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.493 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.494 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.494 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.494 106898 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.494 106898 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.494 106898 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.494 106898 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.495 106898 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:46:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:46:22.495 106898 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 18:46:29 compute-0 sshd-session[107469]: Accepted publickey for zuul from 192.168.122.31 port 36450 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:46:29 compute-0 systemd-logind[795]: New session 22 of user zuul.
Jan 27 18:46:29 compute-0 systemd[1]: Started Session 22 of User zuul.
Jan 27 18:46:29 compute-0 sshd-session[107469]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:46:30 compute-0 python3.9[107622]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:46:31 compute-0 sudo[107776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vquxybwotmfsexphodtngaqnaxgvyxsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539590.887619-29-42358580905707/AnsiballZ_command.py'
Jan 27 18:46:31 compute-0 sudo[107776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:31 compute-0 python3.9[107778]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:46:31 compute-0 sudo[107776]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:32 compute-0 sudo[107941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeplfufmukvjgescfrchnjakdxvsfwdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539591.9542966-40-177836288434408/AnsiballZ_systemd_service.py'
Jan 27 18:46:32 compute-0 sudo[107941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:32 compute-0 python3.9[107943]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:46:32 compute-0 systemd[1]: Reloading.
Jan 27 18:46:33 compute-0 systemd-rc-local-generator[107965]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:46:33 compute-0 systemd-sysv-generator[107971]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:46:33 compute-0 sudo[107941]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:33 compute-0 python3.9[108129]: ansible-ansible.builtin.service_facts Invoked
Jan 27 18:46:34 compute-0 network[108146]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 18:46:34 compute-0 network[108147]: 'network-scripts' will be removed from distribution in near future.
Jan 27 18:46:34 compute-0 network[108148]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 18:46:38 compute-0 sudo[108407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyfaemsbfalaydtduklowckiazmscjfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539597.6440299-59-225891815645009/AnsiballZ_systemd_service.py'
Jan 27 18:46:38 compute-0 sudo[108407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:38 compute-0 python3.9[108409]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:46:38 compute-0 sudo[108407]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:38 compute-0 sudo[108560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfmpkwmaylbruhuzatudewdmmjgdhozw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539598.532876-59-121630451443079/AnsiballZ_systemd_service.py'
Jan 27 18:46:38 compute-0 sudo[108560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:39 compute-0 python3.9[108562]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:46:39 compute-0 sudo[108560]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:39 compute-0 sudo[108713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttenxlpjnuyidolwglbbhgfssrgwtevw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539599.3404968-59-223409039162737/AnsiballZ_systemd_service.py'
Jan 27 18:46:39 compute-0 sudo[108713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:40 compute-0 python3.9[108715]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:46:40 compute-0 sudo[108713]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:40 compute-0 sudo[108866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fusplkofcnwmxrwsxgyafblsixkfkavv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539600.216851-59-223211817235110/AnsiballZ_systemd_service.py'
Jan 27 18:46:40 compute-0 sudo[108866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:40 compute-0 python3.9[108868]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:46:40 compute-0 sudo[108866]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:41 compute-0 sudo[109019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwftdzvhtbcqpsfwjbirciptudsenscw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539601.0982382-59-127267039110461/AnsiballZ_systemd_service.py'
Jan 27 18:46:41 compute-0 sudo[109019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:41 compute-0 python3.9[109021]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:46:41 compute-0 sudo[109019]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:42 compute-0 sudo[109172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqhpzmfjbtpuqqegfvzucbcqlgfgsngq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539601.9079535-59-3035764631163/AnsiballZ_systemd_service.py'
Jan 27 18:46:42 compute-0 sudo[109172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:42 compute-0 python3.9[109174]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:46:42 compute-0 sudo[109172]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:42 compute-0 sudo[109325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcbldexsacinvftrvyfkkimkmiiyiuab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539602.660834-59-33165889953291/AnsiballZ_systemd_service.py'
Jan 27 18:46:42 compute-0 sudo[109325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:43 compute-0 python3.9[109327]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:46:43 compute-0 sudo[109325]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:44 compute-0 sudo[109478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgaetbpopjrjvprvsbciqpuethgkfhgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539603.6363297-111-106712342226934/AnsiballZ_file.py'
Jan 27 18:46:44 compute-0 sudo[109478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:44 compute-0 python3.9[109480]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:44 compute-0 sudo[109478]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:44 compute-0 sudo[109630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioflzjgifzwlulehdvlapcemedgcmrmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539604.521752-111-99097846586824/AnsiballZ_file.py'
Jan 27 18:46:44 compute-0 sudo[109630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:45 compute-0 python3.9[109632]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:45 compute-0 sudo[109630]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:45 compute-0 sudo[109782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntilhwxblgmjcwnoflztocvrzwvkvomo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539605.1938608-111-169504925919587/AnsiballZ_file.py'
Jan 27 18:46:45 compute-0 sudo[109782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:45 compute-0 python3.9[109784]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:45 compute-0 sudo[109782]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:46 compute-0 sudo[109934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbjzmvuncttiwpgbtlfzvjllypdfqkei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539605.8751025-111-143822594380374/AnsiballZ_file.py'
Jan 27 18:46:46 compute-0 sudo[109934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:46 compute-0 python3.9[109936]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:46 compute-0 sudo[109934]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:46 compute-0 sudo[110086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqdrlgujcgawvbtyhwitlfdujtrkgrmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539606.568805-111-222097982635577/AnsiballZ_file.py'
Jan 27 18:46:46 compute-0 sudo[110086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:47 compute-0 python3.9[110088]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:47 compute-0 sudo[110086]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:47 compute-0 sudo[110238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khybskwakzsnrnrfyhzioktamgyethvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539607.2076273-111-109392325098530/AnsiballZ_file.py'
Jan 27 18:46:47 compute-0 sudo[110238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:47 compute-0 python3.9[110240]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:47 compute-0 sudo[110238]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:48 compute-0 sudo[110390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eytfbrdxamhztsrjlzgxosvbikurprfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539607.8943114-111-165119945769128/AnsiballZ_file.py'
Jan 27 18:46:48 compute-0 sudo[110390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:48 compute-0 python3.9[110392]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:48 compute-0 sudo[110390]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:48 compute-0 sudo[110550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kodykfslgwkseqpxvwfvaueeddawahwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539608.6278558-161-112506797293175/AnsiballZ_file.py'
Jan 27 18:46:48 compute-0 sudo[110550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:48 compute-0 podman[110516]: 2026-01-27 18:46:48.993856766 +0000 UTC m=+0.084043355 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 18:46:49 compute-0 python3.9[110556]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:49 compute-0 sudo[110550]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:49 compute-0 sudo[110723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uethkefhbfxkjmzfrfshszexuzmyvexr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539609.3450658-161-54882402388912/AnsiballZ_file.py'
Jan 27 18:46:49 compute-0 sudo[110723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:49 compute-0 podman[110686]: 2026-01-27 18:46:49.767949399 +0000 UTC m=+0.116594819 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 18:46:49 compute-0 python3.9[110733]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:49 compute-0 sudo[110723]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:50 compute-0 sudo[110891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvmsbozbtgygpubcczdngcnhuxwkdgzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539610.0754604-161-131356360088695/AnsiballZ_file.py'
Jan 27 18:46:50 compute-0 sudo[110891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:50 compute-0 python3.9[110893]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:50 compute-0 sudo[110891]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:51 compute-0 sudo[111043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vldytavirgkdrteatdnnjxvuyexjncox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539610.9086804-161-140297592624633/AnsiballZ_file.py'
Jan 27 18:46:51 compute-0 sudo[111043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:51 compute-0 python3.9[111045]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:51 compute-0 sudo[111043]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:51 compute-0 sudo[111195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmgcixxyivbzdlrecovpkmlhepqekgkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539611.6259456-161-240400823716597/AnsiballZ_file.py'
Jan 27 18:46:51 compute-0 sudo[111195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:52 compute-0 python3.9[111197]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:52 compute-0 sudo[111195]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:52 compute-0 sudo[111347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcxkustdayalpvxegxumrgskdybcveaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539612.2530322-161-97651934344752/AnsiballZ_file.py'
Jan 27 18:46:52 compute-0 sudo[111347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:52 compute-0 python3.9[111349]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:52 compute-0 sudo[111347]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:53 compute-0 sudo[111499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzrqgrknpjleskitpogyvmhuijxukoyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539612.9198873-161-26628025172045/AnsiballZ_file.py'
Jan 27 18:46:53 compute-0 sudo[111499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:53 compute-0 python3.9[111501]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:46:53 compute-0 sudo[111499]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:54 compute-0 sudo[111651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiyovldhmyovkgqutqgfhpyvxtwnlyup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539614.0548694-212-2351024075283/AnsiballZ_command.py'
Jan 27 18:46:54 compute-0 sudo[111651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:54 compute-0 python3.9[111653]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:46:54 compute-0 sudo[111651]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:55 compute-0 python3.9[111805]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 18:46:55 compute-0 sudo[111955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yleitxvwmnvwrkzbrkcckkxbspeotqgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539615.7036707-230-167299802768275/AnsiballZ_systemd_service.py'
Jan 27 18:46:55 compute-0 sudo[111955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:56 compute-0 python3.9[111957]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:46:56 compute-0 systemd[1]: Reloading.
Jan 27 18:46:56 compute-0 systemd-rc-local-generator[111985]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:46:56 compute-0 systemd-sysv-generator[111990]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:46:56 compute-0 sudo[111955]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:57 compute-0 sudo[112143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rexhtxxdbvpuvenjiwulmimvympppvpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539616.7439473-238-98233866013755/AnsiballZ_command.py'
Jan 27 18:46:57 compute-0 sudo[112143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:57 compute-0 python3.9[112145]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:46:57 compute-0 sudo[112143]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:57 compute-0 sudo[112296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckjvyohqjggjtqotjhkybcavkvbzdbcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539617.435011-238-1461770173992/AnsiballZ_command.py'
Jan 27 18:46:57 compute-0 sudo[112296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:57 compute-0 python3.9[112298]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:46:57 compute-0 sudo[112296]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:58 compute-0 sudo[112449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxybdpkhoytzfeqnolmomrzbxfyarwit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539618.0753155-238-256259035505335/AnsiballZ_command.py'
Jan 27 18:46:58 compute-0 sudo[112449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:58 compute-0 python3.9[112451]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:46:58 compute-0 sudo[112449]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:59 compute-0 sudo[112602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dayuerrqrdbuhzkweycxctwqlhxznjcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539618.8230445-238-231168534641711/AnsiballZ_command.py'
Jan 27 18:46:59 compute-0 sudo[112602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:46:59 compute-0 python3.9[112604]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:46:59 compute-0 sudo[112602]: pam_unix(sudo:session): session closed for user root
Jan 27 18:46:59 compute-0 sudo[112755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajcjuibufvxlykmzkfvpmgvfpdmvgbln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539619.4891663-238-92010501370953/AnsiballZ_command.py'
Jan 27 18:46:59 compute-0 sudo[112755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:47:00 compute-0 python3.9[112757]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:47:00 compute-0 sudo[112755]: pam_unix(sudo:session): session closed for user root
Jan 27 18:47:00 compute-0 sudo[112908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfodkarwsntwidtmhptzeoidwrpvlvcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539620.3462522-238-109089004526843/AnsiballZ_command.py'
Jan 27 18:47:00 compute-0 sudo[112908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:47:00 compute-0 python3.9[112910]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:47:00 compute-0 sudo[112908]: pam_unix(sudo:session): session closed for user root
Jan 27 18:47:01 compute-0 sudo[113061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dovxzjtyfuyvppruhhpcdeedotaoouuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539621.026324-238-86147170822868/AnsiballZ_command.py'
Jan 27 18:47:01 compute-0 sudo[113061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:47:01 compute-0 python3.9[113063]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:47:01 compute-0 sudo[113061]: pam_unix(sudo:session): session closed for user root
Jan 27 18:47:02 compute-0 sudo[113214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rypiphteifbjctgmsevetjsmqjqxvuin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539621.973591-292-273732871237448/AnsiballZ_getent.py'
Jan 27 18:47:02 compute-0 sudo[113214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:47:02 compute-0 python3.9[113216]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 27 18:47:02 compute-0 sudo[113214]: pam_unix(sudo:session): session closed for user root
Jan 27 18:47:03 compute-0 sudo[113367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tossynwtbcttcntadpvtfonhaewxzoao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539622.8487313-300-242111409806749/AnsiballZ_group.py'
Jan 27 18:47:03 compute-0 sudo[113367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:47:03 compute-0 python3.9[113369]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 18:47:03 compute-0 groupadd[113370]: group added to /etc/group: name=libvirt, GID=42473
Jan 27 18:47:03 compute-0 groupadd[113370]: group added to /etc/gshadow: name=libvirt
Jan 27 18:47:03 compute-0 groupadd[113370]: new group: name=libvirt, GID=42473
Jan 27 18:47:03 compute-0 sudo[113367]: pam_unix(sudo:session): session closed for user root
Jan 27 18:47:04 compute-0 sudo[113525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njzpnwyukhitrspcrlkqppgghnyvsqga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539623.9968529-308-239624520682951/AnsiballZ_user.py'
Jan 27 18:47:04 compute-0 sudo[113525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:47:04 compute-0 python3.9[113527]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 18:47:04 compute-0 useradd[113529]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 18:47:04 compute-0 sudo[113525]: pam_unix(sudo:session): session closed for user root
Jan 27 18:47:05 compute-0 sudo[113685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlaayiyozitgywqvpzbyectvleixpgug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539625.1643543-319-89651177841714/AnsiballZ_setup.py'
Jan 27 18:47:05 compute-0 sudo[113685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:47:05 compute-0 python3.9[113687]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:47:06 compute-0 sudo[113685]: pam_unix(sudo:session): session closed for user root
Jan 27 18:47:06 compute-0 sudo[113769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkvmnxjwgfzgfloyutobxzbgxvwpkpct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539625.1643543-319-89651177841714/AnsiballZ_dnf.py'
Jan 27 18:47:06 compute-0 sudo[113769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:47:06 compute-0 python3.9[113771]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:47:08 compute-0 sshd-session[113773]: Invalid user sol from 45.148.10.240 port 60726
Jan 27 18:47:08 compute-0 sshd-session[113773]: Connection closed by invalid user sol 45.148.10.240 port 60726 [preauth]
Jan 27 18:47:19 compute-0 podman[113781]: 2026-01-27 18:47:19.341587757 +0000 UTC m=+0.101636433 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 18:47:20 compute-0 podman[113801]: 2026-01-27 18:47:20.374649597 +0000 UTC m=+0.095413991 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 18:47:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:47:20.478 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:47:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:47:20.478 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:47:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:47:20.478 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:47:50 compute-0 podman[113831]: 2026-01-27 18:47:50.282771985 +0000 UTC m=+0.061869168 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 18:47:51 compute-0 podman[113851]: 2026-01-27 18:47:51.307634594 +0000 UTC m=+0.087308784 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 18:48:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:48:20.480 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:48:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:48:20.480 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:48:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:48:20.480 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:48:20 compute-0 podman[113877]: 2026-01-27 18:48:20.595640837 +0000 UTC m=+0.076553170 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 27 18:48:22 compute-0 podman[113909]: 2026-01-27 18:48:22.359935337 +0000 UTC m=+0.135657183 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 18:48:43 compute-0 kernel: SELinux:  Converting 2763 SID table entries...
Jan 27 18:48:43 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 18:48:43 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 18:48:43 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 18:48:43 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 18:48:43 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 18:48:43 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 18:48:43 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 18:48:51 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 27 18:48:51 compute-0 podman[114107]: 2026-01-27 18:48:51.313729892 +0000 UTC m=+0.079388951 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 18:48:53 compute-0 podman[114126]: 2026-01-27 18:48:53.331897212 +0000 UTC m=+0.116704600 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 18:48:53 compute-0 kernel: SELinux:  Converting 2763 SID table entries...
Jan 27 18:48:53 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 18:48:53 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 18:48:53 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 18:48:53 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 18:48:53 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 18:48:53 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 18:48:53 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 18:49:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:49:20.482 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:49:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:49:20.484 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:49:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:49:20.484 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:49:22 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 27 18:49:22 compute-0 podman[123188]: 2026-01-27 18:49:22.316811434 +0000 UTC m=+0.073562669 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 18:49:24 compute-0 podman[124409]: 2026-01-27 18:49:24.334774055 +0000 UTC m=+0.101309560 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 18:49:27 compute-0 sshd-session[126132]: Invalid user sol from 45.148.10.240 port 35908
Jan 27 18:49:27 compute-0 sshd-session[126132]: Connection closed by invalid user sol 45.148.10.240 port 35908 [preauth]
Jan 27 18:49:48 compute-0 kernel: SELinux:  Converting 2764 SID table entries...
Jan 27 18:49:48 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 18:49:48 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 18:49:48 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 18:49:48 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 18:49:48 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 18:49:48 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 18:49:48 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 18:49:49 compute-0 groupadd[131092]: group added to /etc/group: name=dnsmasq, GID=993
Jan 27 18:49:49 compute-0 groupadd[131092]: group added to /etc/gshadow: name=dnsmasq
Jan 27 18:49:49 compute-0 groupadd[131092]: new group: name=dnsmasq, GID=993
Jan 27 18:49:49 compute-0 useradd[131099]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 27 18:49:49 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 27 18:49:49 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 27 18:49:49 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 27 18:49:50 compute-0 groupadd[131112]: group added to /etc/group: name=clevis, GID=992
Jan 27 18:49:50 compute-0 groupadd[131112]: group added to /etc/gshadow: name=clevis
Jan 27 18:49:50 compute-0 groupadd[131112]: new group: name=clevis, GID=992
Jan 27 18:49:50 compute-0 useradd[131119]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 27 18:49:50 compute-0 usermod[131129]: add 'clevis' to group 'tss'
Jan 27 18:49:50 compute-0 usermod[131129]: add 'clevis' to shadow group 'tss'
Jan 27 18:49:53 compute-0 polkitd[43613]: Reloading rules
Jan 27 18:49:53 compute-0 polkitd[43613]: Collecting garbage unconditionally...
Jan 27 18:49:53 compute-0 polkitd[43613]: Loading rules from directory /etc/polkit-1/rules.d
Jan 27 18:49:53 compute-0 polkitd[43613]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 27 18:49:53 compute-0 polkitd[43613]: Finished loading, compiling and executing 3 rules
Jan 27 18:49:53 compute-0 polkitd[43613]: Reloading rules
Jan 27 18:49:53 compute-0 polkitd[43613]: Collecting garbage unconditionally...
Jan 27 18:49:53 compute-0 polkitd[43613]: Loading rules from directory /etc/polkit-1/rules.d
Jan 27 18:49:53 compute-0 polkitd[43613]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 27 18:49:53 compute-0 polkitd[43613]: Finished loading, compiling and executing 3 rules
Jan 27 18:49:53 compute-0 podman[131157]: 2026-01-27 18:49:53.173847038 +0000 UTC m=+0.086237157 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 18:49:54 compute-0 groupadd[131338]: group added to /etc/group: name=ceph, GID=167
Jan 27 18:49:54 compute-0 groupadd[131338]: group added to /etc/gshadow: name=ceph
Jan 27 18:49:54 compute-0 groupadd[131338]: new group: name=ceph, GID=167
Jan 27 18:49:54 compute-0 useradd[131355]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 27 18:49:54 compute-0 podman[131339]: 2026-01-27 18:49:54.603503304 +0000 UTC m=+0.120494893 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 18:49:57 compute-0 sshd[1007]: Received signal 15; terminating.
Jan 27 18:49:57 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 27 18:49:57 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 27 18:49:57 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 27 18:49:57 compute-0 systemd[1]: sshd.service: Consumed 1.925s CPU time, read 32.0K from disk, written 56.0K to disk.
Jan 27 18:49:57 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 27 18:49:57 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 27 18:49:57 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 18:49:57 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 18:49:57 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 18:49:57 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 27 18:49:57 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 27 18:49:57 compute-0 sshd[131890]: Server listening on 0.0.0.0 port 22.
Jan 27 18:49:57 compute-0 sshd[131890]: Server listening on :: port 22.
Jan 27 18:49:57 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 27 18:49:59 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 18:49:59 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 18:49:59 compute-0 systemd[1]: Reloading.
Jan 27 18:50:00 compute-0 systemd-rc-local-generator[132151]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:50:00 compute-0 systemd-sysv-generator[132154]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:50:00 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 18:50:03 compute-0 sudo[113769]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:04 compute-0 sudo[136630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpdjgnqhqzrthgbtkeocsxpvkxwoajkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539803.477947-331-84566447804628/AnsiballZ_systemd.py'
Jan 27 18:50:04 compute-0 sudo[136630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:04 compute-0 python3.9[136662]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 18:50:04 compute-0 systemd[1]: Reloading.
Jan 27 18:50:04 compute-0 systemd-rc-local-generator[137071]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:50:04 compute-0 systemd-sysv-generator[137074]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:50:04 compute-0 sudo[136630]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:05 compute-0 sudo[137904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whcpthrfjrhkiifsugtmtmkurbvdivtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539804.8283415-331-83725122769350/AnsiballZ_systemd.py'
Jan 27 18:50:05 compute-0 sudo[137904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:05 compute-0 python3.9[137929]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 18:50:05 compute-0 systemd[1]: Reloading.
Jan 27 18:50:05 compute-0 systemd-rc-local-generator[138272]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:50:05 compute-0 systemd-sysv-generator[138275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:50:05 compute-0 sudo[137904]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:06 compute-0 sudo[138999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkoabmjdsrejdmjajmzipdzpgchqhcwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539805.8934758-331-120049967127306/AnsiballZ_systemd.py'
Jan 27 18:50:06 compute-0 sudo[138999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:06 compute-0 python3.9[139020]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 18:50:06 compute-0 systemd[1]: Reloading.
Jan 27 18:50:06 compute-0 systemd-sysv-generator[139567]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:50:06 compute-0 systemd-rc-local-generator[139562]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:50:06 compute-0 sudo[138999]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:07 compute-0 sudo[140317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihmjkzwohhkjdzbnnplfplujqmffbdpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539806.9540071-331-173712467797579/AnsiballZ_systemd.py'
Jan 27 18:50:07 compute-0 sudo[140317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:07 compute-0 python3.9[140334]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 18:50:07 compute-0 systemd[1]: Reloading.
Jan 27 18:50:07 compute-0 systemd-sysv-generator[140773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:50:07 compute-0 systemd-rc-local-generator[140770]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:50:07 compute-0 sudo[140317]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:08 compute-0 sudo[141435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcfdzpolkvzsjarbnaunkxvodrqusini ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539808.054335-360-254839570788986/AnsiballZ_systemd.py'
Jan 27 18:50:08 compute-0 sudo[141435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 18:50:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 18:50:08 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.960s CPU time.
Jan 27 18:50:08 compute-0 systemd[1]: run-r9daec86553e7423a95054581cea0a130.service: Deactivated successfully.
Jan 27 18:50:08 compute-0 python3.9[141437]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:08 compute-0 systemd[1]: Reloading.
Jan 27 18:50:08 compute-0 systemd-rc-local-generator[141469]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:50:08 compute-0 systemd-sysv-generator[141473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:50:09 compute-0 sudo[141435]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:09 compute-0 sudo[141626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgvzwtqjwjspwykptbbzgknpsnglnley ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539809.1680975-360-49826114852673/AnsiballZ_systemd.py'
Jan 27 18:50:09 compute-0 sudo[141626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:09 compute-0 python3.9[141628]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:10 compute-0 systemd[1]: Reloading.
Jan 27 18:50:10 compute-0 systemd-rc-local-generator[141657]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:50:10 compute-0 systemd-sysv-generator[141662]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:50:10 compute-0 sudo[141626]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:10 compute-0 sudo[141816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztxchthprauupqdszgaaklgvekabfhwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539810.4799712-360-64231514691729/AnsiballZ_systemd.py'
Jan 27 18:50:10 compute-0 sudo[141816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:11 compute-0 python3.9[141818]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:11 compute-0 systemd[1]: Reloading.
Jan 27 18:50:11 compute-0 systemd-sysv-generator[141851]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:50:11 compute-0 systemd-rc-local-generator[141848]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:50:11 compute-0 sudo[141816]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:11 compute-0 sudo[142005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvbcgjwhlndfweccvpytsfssyokjsbzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539811.704517-360-27813357997544/AnsiballZ_systemd.py'
Jan 27 18:50:12 compute-0 sudo[142005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:12 compute-0 python3.9[142007]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:12 compute-0 sudo[142005]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:12 compute-0 sudo[142160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdiokiwcyogjrfnwauspkwdfoxldlukm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539812.552802-360-9761840741686/AnsiballZ_systemd.py'
Jan 27 18:50:12 compute-0 sudo[142160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:13 compute-0 python3.9[142162]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:13 compute-0 systemd[1]: Reloading.
Jan 27 18:50:13 compute-0 systemd-sysv-generator[142198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:50:13 compute-0 systemd-rc-local-generator[142192]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:50:13 compute-0 sudo[142160]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:14 compute-0 sudo[142350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnnjjhzymspgatsrotbhqqndfmkjwrdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539813.7196805-396-250636808510472/AnsiballZ_systemd.py'
Jan 27 18:50:14 compute-0 sudo[142350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:14 compute-0 python3.9[142352]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 18:50:14 compute-0 systemd[1]: Reloading.
Jan 27 18:50:14 compute-0 systemd-rc-local-generator[142383]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:50:14 compute-0 systemd-sysv-generator[142386]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:50:14 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 27 18:50:14 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 27 18:50:14 compute-0 sudo[142350]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:15 compute-0 sudo[142543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eseklsomicredmayjyuslysjmmgajxza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539814.936901-404-119461519793553/AnsiballZ_systemd.py'
Jan 27 18:50:15 compute-0 sudo[142543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:15 compute-0 python3.9[142545]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:15 compute-0 sudo[142543]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:16 compute-0 sudo[142698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzrjsgvxkeknuhprkhognwfolviytwds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539815.8558123-404-215553863070080/AnsiballZ_systemd.py'
Jan 27 18:50:16 compute-0 sudo[142698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:16 compute-0 python3.9[142700]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:16 compute-0 sudo[142698]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:17 compute-0 sudo[142853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-worylprfotfwjsvsahqsoxpgsdvinvtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539816.719484-404-248124296829036/AnsiballZ_systemd.py'
Jan 27 18:50:17 compute-0 sudo[142853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:17 compute-0 python3.9[142855]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:17 compute-0 sudo[142853]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:17 compute-0 sudo[143008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gboyetuntrkvfoumvnuajvdlbpmkxgrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539817.5792725-404-217506074948646/AnsiballZ_systemd.py'
Jan 27 18:50:17 compute-0 sudo[143008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:18 compute-0 python3.9[143010]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:18 compute-0 sudo[143008]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:18 compute-0 sudo[143164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wojwznhjewjasbnnifmugnydvvmhyrad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539818.4522662-404-182112883427216/AnsiballZ_systemd.py'
Jan 27 18:50:19 compute-0 sudo[143164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:19 compute-0 python3.9[143166]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:19 compute-0 sudo[143164]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:19 compute-0 sudo[143319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmbneoxilxceaquetoazpmpxppwcpdqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539819.5337615-404-139387883023561/AnsiballZ_systemd.py'
Jan 27 18:50:19 compute-0 sudo[143319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:20 compute-0 python3.9[143321]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:20 compute-0 sudo[143319]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:50:20.483 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:50:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:50:20.485 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:50:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:50:20.485 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:50:20 compute-0 sudo[143474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-korupyacjezdcwesdrjajowtbvcefiym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539820.324009-404-135649275722677/AnsiballZ_systemd.py'
Jan 27 18:50:20 compute-0 sudo[143474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:20 compute-0 python3.9[143476]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:21 compute-0 sudo[143474]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:21 compute-0 sudo[143629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qquhlnoptcmtdxcewyacycbbuzvylrqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539821.162752-404-30806624522564/AnsiballZ_systemd.py'
Jan 27 18:50:21 compute-0 sudo[143629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:21 compute-0 python3.9[143631]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:21 compute-0 sudo[143629]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:22 compute-0 sudo[143784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkpjqbaqrwxxyduflciplozbbsdtqoay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539821.9849315-404-27788662561356/AnsiballZ_systemd.py'
Jan 27 18:50:22 compute-0 sudo[143784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:22 compute-0 python3.9[143786]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:22 compute-0 sudo[143784]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:23 compute-0 sudo[143951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuxeeazaykluuwoizeznwmiixdeursst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539822.9078748-404-189594452840327/AnsiballZ_systemd.py'
Jan 27 18:50:23 compute-0 sudo[143951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:23 compute-0 podman[143913]: 2026-01-27 18:50:23.293554516 +0000 UTC m=+0.065278656 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 18:50:23 compute-0 python3.9[143957]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:23 compute-0 sudo[143951]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:24 compute-0 sudo[144113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igozopoczczguugcwyadckxxghqkgefr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539823.8171706-404-177753013381767/AnsiballZ_systemd.py'
Jan 27 18:50:24 compute-0 sudo[144113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:24 compute-0 python3.9[144115]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:24 compute-0 sudo[144113]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:24 compute-0 sudo[144279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciomyurbelawexkmrrfnpksxhahzmkmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539824.6016293-404-219718355629712/AnsiballZ_systemd.py'
Jan 27 18:50:24 compute-0 sudo[144279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:24 compute-0 podman[144242]: 2026-01-27 18:50:24.973694998 +0000 UTC m=+0.084235088 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 18:50:25 compute-0 python3.9[144288]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:25 compute-0 sudo[144279]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:25 compute-0 sudo[144448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwiwefvsbvsmaulqfvjxggienkwnbokm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539825.4372811-404-233863069825530/AnsiballZ_systemd.py'
Jan 27 18:50:25 compute-0 sudo[144448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:26 compute-0 python3.9[144450]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:27 compute-0 sudo[144448]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:27 compute-0 sudo[144603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qidnjlrqdrhsiesrowzmnuznghpglpvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539827.2941766-404-207780234115411/AnsiballZ_systemd.py'
Jan 27 18:50:27 compute-0 sudo[144603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:27 compute-0 python3.9[144605]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 18:50:27 compute-0 sudo[144603]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:28 compute-0 sudo[144758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcqtqppfrqhgitpuqzsriyvzcnzbcvsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539828.310433-506-134361740612767/AnsiballZ_file.py'
Jan 27 18:50:28 compute-0 sudo[144758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:28 compute-0 python3.9[144760]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:50:28 compute-0 sudo[144758]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:29 compute-0 sudo[144910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiljulfyyrhlrqovojslhqvkjjowjxfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539829.003026-506-115518879088199/AnsiballZ_file.py'
Jan 27 18:50:29 compute-0 sudo[144910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:29 compute-0 python3.9[144912]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:50:29 compute-0 sudo[144910]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:30 compute-0 sudo[145062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsbmqsrgcjulmxnrocurrjcselcqcbmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539829.633043-506-209640568893376/AnsiballZ_file.py'
Jan 27 18:50:30 compute-0 sudo[145062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:30 compute-0 python3.9[145064]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:50:30 compute-0 sudo[145062]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:30 compute-0 sudo[145214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xthugxjmkremtmdiiheptjaexpcjogoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539830.443768-506-238595450407880/AnsiballZ_file.py'
Jan 27 18:50:30 compute-0 sudo[145214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:30 compute-0 python3.9[145216]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:50:30 compute-0 sudo[145214]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:31 compute-0 sudo[145366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgnmiohdmrujcvldqkdteknupnqwlwbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539831.0729508-506-67201221577037/AnsiballZ_file.py'
Jan 27 18:50:31 compute-0 sudo[145366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:31 compute-0 python3.9[145368]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:50:31 compute-0 sudo[145366]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:32 compute-0 sudo[145518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlonpfjspuahafvwxoqdkdzmofhlbsry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539831.6936605-506-246227638864030/AnsiballZ_file.py'
Jan 27 18:50:32 compute-0 sudo[145518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:32 compute-0 python3.9[145520]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:50:32 compute-0 sudo[145518]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:32 compute-0 python3.9[145670]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:50:33 compute-0 sudo[145820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzdxijtiunmplpndyoetpwfmqvfltwkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539833.095387-557-227461974967728/AnsiballZ_stat.py'
Jan 27 18:50:33 compute-0 sudo[145820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:33 compute-0 python3.9[145822]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:50:33 compute-0 sudo[145820]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:34 compute-0 sudo[145945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opektkactjfxwsjnpprtumpoexyrkrfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539833.095387-557-227461974967728/AnsiballZ_copy.py'
Jan 27 18:50:34 compute-0 sudo[145945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:34 compute-0 python3.9[145947]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769539833.095387-557-227461974967728/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:34 compute-0 sudo[145945]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:35 compute-0 sudo[146097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wclopfuzzrqwzlqzfzzvtrtalxijmzib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539834.908459-557-219387352415526/AnsiballZ_stat.py'
Jan 27 18:50:35 compute-0 sudo[146097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:35 compute-0 python3.9[146099]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:50:35 compute-0 sudo[146097]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:35 compute-0 sudo[146222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypgqooobpsagbazbdilslbukzmsvqkvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539834.908459-557-219387352415526/AnsiballZ_copy.py'
Jan 27 18:50:35 compute-0 sudo[146222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:35 compute-0 python3.9[146224]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769539834.908459-557-219387352415526/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:35 compute-0 sudo[146222]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:36 compute-0 sudo[146374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzjwckzigrtbefggpfkpocbifnqvvgbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539836.135659-557-142265601180642/AnsiballZ_stat.py'
Jan 27 18:50:36 compute-0 sudo[146374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:36 compute-0 python3.9[146376]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:50:36 compute-0 sudo[146374]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:37 compute-0 sudo[146499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czcnrsmuszeycmficndbbbxrsfuubinl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539836.135659-557-142265601180642/AnsiballZ_copy.py'
Jan 27 18:50:37 compute-0 sudo[146499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:37 compute-0 python3.9[146501]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769539836.135659-557-142265601180642/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:37 compute-0 sudo[146499]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:37 compute-0 sudo[146651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nclrlmgxedgenyilynuqmgkhytqcnnpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539837.3622615-557-176117384591604/AnsiballZ_stat.py'
Jan 27 18:50:37 compute-0 sudo[146651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:37 compute-0 python3.9[146653]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:50:37 compute-0 sudo[146651]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:38 compute-0 sudo[146776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-togullhkklwvcexbcgvjpwujtdosfuxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539837.3622615-557-176117384591604/AnsiballZ_copy.py'
Jan 27 18:50:38 compute-0 sudo[146776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:38 compute-0 python3.9[146778]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769539837.3622615-557-176117384591604/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:38 compute-0 sudo[146776]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:38 compute-0 sudo[146928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbxthbaodezsuukndfbrubrxhotnihip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539838.6089652-557-40344040863560/AnsiballZ_stat.py'
Jan 27 18:50:38 compute-0 sudo[146928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:39 compute-0 python3.9[146930]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:50:39 compute-0 sudo[146928]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:39 compute-0 sudo[147053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edkghqifotsahqudlompflxzkdpgmoif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539838.6089652-557-40344040863560/AnsiballZ_copy.py'
Jan 27 18:50:39 compute-0 sudo[147053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:39 compute-0 python3.9[147055]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769539838.6089652-557-40344040863560/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:39 compute-0 sudo[147053]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:40 compute-0 sudo[147205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbrdyrrwjdgxxbljqpasjqmhtbnekeqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539839.908357-557-12247745428708/AnsiballZ_stat.py'
Jan 27 18:50:40 compute-0 sudo[147205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:40 compute-0 python3.9[147207]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:50:40 compute-0 sudo[147205]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:40 compute-0 sudo[147330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgpmgzlqhrkrwwcpmyxpgvuhpqzgwpxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539839.908357-557-12247745428708/AnsiballZ_copy.py'
Jan 27 18:50:40 compute-0 sudo[147330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:40 compute-0 python3.9[147332]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769539839.908357-557-12247745428708/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:40 compute-0 sudo[147330]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:41 compute-0 sudo[147482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxrvfvntyzwfgpkpuzonbxvhwprfnice ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539841.1510136-557-23253017225967/AnsiballZ_stat.py'
Jan 27 18:50:41 compute-0 sudo[147482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:41 compute-0 python3.9[147484]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:50:41 compute-0 sudo[147482]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:42 compute-0 sudo[147605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncoxrdgjgmadxjlenvauqakaervzahtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539841.1510136-557-23253017225967/AnsiballZ_copy.py'
Jan 27 18:50:42 compute-0 sudo[147605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:42 compute-0 python3.9[147607]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769539841.1510136-557-23253017225967/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:42 compute-0 sudo[147605]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:42 compute-0 sudo[147757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-korcbrfqodzgdecesvsdplfcpvzkcdqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539842.4258614-557-210233754702777/AnsiballZ_stat.py'
Jan 27 18:50:42 compute-0 sudo[147757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:42 compute-0 python3.9[147759]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:50:42 compute-0 sudo[147757]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:43 compute-0 sudo[147882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zurcttqmjkuextzvtmlbeegahoxtksam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539842.4258614-557-210233754702777/AnsiballZ_copy.py'
Jan 27 18:50:43 compute-0 sudo[147882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:43 compute-0 python3.9[147884]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769539842.4258614-557-210233754702777/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:43 compute-0 sudo[147882]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:44 compute-0 sudo[148034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djakepthmafbmnxkxwfmyajvqzhiwtsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539843.799138-670-258090577527191/AnsiballZ_command.py'
Jan 27 18:50:44 compute-0 sudo[148034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:44 compute-0 python3.9[148036]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 27 18:50:44 compute-0 sudo[148034]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:44 compute-0 sudo[148187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmccsncwivdmqbuwwdrbnuccgkxzexzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539844.5249622-679-59964794993261/AnsiballZ_file.py'
Jan 27 18:50:44 compute-0 sudo[148187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:45 compute-0 python3.9[148189]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:45 compute-0 sudo[148187]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:45 compute-0 sudo[148339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neuupkpwvhstrsnmykheeywiqicjmwkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539845.2049966-679-280540458603697/AnsiballZ_file.py'
Jan 27 18:50:45 compute-0 sudo[148339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:45 compute-0 python3.9[148341]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:45 compute-0 sudo[148339]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:46 compute-0 sudo[148491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehlqyrcfcfwcstepwpkrjeikqwsjhzer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539845.8222008-679-113834066453112/AnsiballZ_file.py'
Jan 27 18:50:46 compute-0 sudo[148491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:46 compute-0 python3.9[148493]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:46 compute-0 sudo[148491]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:46 compute-0 sudo[148643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhvsoyapllyjdoowbymkqegplwmloovv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539846.5369658-679-10032120193754/AnsiballZ_file.py'
Jan 27 18:50:46 compute-0 sudo[148643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:46 compute-0 python3.9[148645]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:47 compute-0 sudo[148643]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:47 compute-0 sudo[148795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvddrrwpheszwmwqvrqmicjytqnqjzlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539847.145867-679-256993130336256/AnsiballZ_file.py'
Jan 27 18:50:47 compute-0 sudo[148795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:47 compute-0 python3.9[148797]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:47 compute-0 sudo[148795]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:48 compute-0 sudo[148947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogdhvzgoivgolfzfwfoppqhwnizljzfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539847.9742825-679-42091264569590/AnsiballZ_file.py'
Jan 27 18:50:48 compute-0 sudo[148947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:48 compute-0 python3.9[148949]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:48 compute-0 sudo[148947]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:48 compute-0 sudo[149099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbjknvtdhuubfjqsegcvccnzfyyefcrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539848.5817797-679-169093027484241/AnsiballZ_file.py'
Jan 27 18:50:48 compute-0 sudo[149099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:49 compute-0 python3.9[149101]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:49 compute-0 sudo[149099]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:49 compute-0 sudo[149251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mayorvukpkqnlphssydwkoldsshgdegs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539849.3843248-679-174872032911071/AnsiballZ_file.py'
Jan 27 18:50:49 compute-0 sudo[149251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:49 compute-0 python3.9[149253]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:49 compute-0 sudo[149251]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:50 compute-0 sudo[149403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-islkzeekebwhweebrnvdodjlfqigqeqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539850.1220183-679-33056285932279/AnsiballZ_file.py'
Jan 27 18:50:50 compute-0 sudo[149403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:50 compute-0 python3.9[149405]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:50 compute-0 sudo[149403]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:51 compute-0 sudo[149555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfahnntveavjuhbxszvffxzolodejjmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539850.8750856-679-220381067081277/AnsiballZ_file.py'
Jan 27 18:50:51 compute-0 sudo[149555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:51 compute-0 python3.9[149557]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:51 compute-0 sudo[149555]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:51 compute-0 sudo[149707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtwewxwrwmmvlhetdbvzzgnochusreas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539851.6113918-679-34029798253047/AnsiballZ_file.py'
Jan 27 18:50:51 compute-0 sudo[149707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:52 compute-0 python3.9[149709]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:52 compute-0 sudo[149707]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:52 compute-0 sudo[149859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofeyyuvahohelifmthxdejtognnbbeht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539852.2897801-679-33086672019661/AnsiballZ_file.py'
Jan 27 18:50:52 compute-0 sudo[149859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:53 compute-0 python3.9[149861]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:53 compute-0 sudo[149859]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:53 compute-0 sudo[150022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onoskasdrcndfvcndzpbroetptsnbfgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539853.211432-679-181940113527574/AnsiballZ_file.py'
Jan 27 18:50:53 compute-0 sudo[150022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:53 compute-0 podman[149985]: 2026-01-27 18:50:53.517436045 +0000 UTC m=+0.085306375 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 18:50:53 compute-0 python3.9[150030]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:53 compute-0 sudo[150022]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:54 compute-0 sudo[150180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uopagrtrzgrqgsphwdxnbaabgcpigfzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539853.8502057-679-260113521758678/AnsiballZ_file.py'
Jan 27 18:50:54 compute-0 sudo[150180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:54 compute-0 python3.9[150182]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:54 compute-0 sudo[150180]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:54 compute-0 sudo[150332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nksnzthnsdaodwbitzjefeopkjxbtdsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539854.5327652-778-159933283121167/AnsiballZ_stat.py'
Jan 27 18:50:54 compute-0 sudo[150332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:55 compute-0 python3.9[150334]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:50:55 compute-0 sudo[150332]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:55 compute-0 podman[150383]: 2026-01-27 18:50:55.383945887 +0000 UTC m=+0.151010021 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 18:50:55 compute-0 sudo[150481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncwoywpsutcjnqzouxniayspnjxmirdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539854.5327652-778-159933283121167/AnsiballZ_copy.py'
Jan 27 18:50:55 compute-0 sudo[150481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:55 compute-0 python3.9[150483]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539854.5327652-778-159933283121167/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:55 compute-0 sudo[150481]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:56 compute-0 sudo[150633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfiemcgitjljraqzpkrrqfgxprgaaeze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539855.781696-778-73382338489271/AnsiballZ_stat.py'
Jan 27 18:50:56 compute-0 sudo[150633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:56 compute-0 python3.9[150635]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:50:56 compute-0 sudo[150633]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:56 compute-0 sudo[150756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yffxlismfmkwcjmwznroskozxxgjbrob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539855.781696-778-73382338489271/AnsiballZ_copy.py'
Jan 27 18:50:56 compute-0 sudo[150756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:57 compute-0 python3.9[150758]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539855.781696-778-73382338489271/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:57 compute-0 sudo[150756]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:57 compute-0 sudo[150908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjfdlwlmmupptvgmffmvxceqqpbvhbqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539857.1701963-778-199402512006056/AnsiballZ_stat.py'
Jan 27 18:50:57 compute-0 sudo[150908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:57 compute-0 python3.9[150910]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:50:57 compute-0 sudo[150908]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:57 compute-0 sudo[151031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieyqrtxwxvmzihhfobmwlofxfujqhanz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539857.1701963-778-199402512006056/AnsiballZ_copy.py'
Jan 27 18:50:58 compute-0 sudo[151031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:58 compute-0 python3.9[151033]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539857.1701963-778-199402512006056/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:58 compute-0 sudo[151031]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:58 compute-0 sudo[151183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtzfqefxzueipseigllduihcvfoslikn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539858.3402648-778-200582797749071/AnsiballZ_stat.py'
Jan 27 18:50:58 compute-0 sudo[151183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:58 compute-0 python3.9[151185]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:50:58 compute-0 sudo[151183]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:59 compute-0 sudo[151306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqmdsihteeoncbmrzfylcfhmkubzbkde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539858.3402648-778-200582797749071/AnsiballZ_copy.py'
Jan 27 18:50:59 compute-0 sudo[151306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:50:59 compute-0 python3.9[151308]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539858.3402648-778-200582797749071/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:50:59 compute-0 sudo[151306]: pam_unix(sudo:session): session closed for user root
Jan 27 18:50:59 compute-0 sudo[151458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yreniwfxxgbirmrvqlyvswyyhikgxxub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539859.4740512-778-237812341518405/AnsiballZ_stat.py'
Jan 27 18:50:59 compute-0 sudo[151458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:00 compute-0 python3.9[151460]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:00 compute-0 sudo[151458]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:00 compute-0 sudo[151581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwesaxpsgfmzaiknwioiqaebbjxmopyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539859.4740512-778-237812341518405/AnsiballZ_copy.py'
Jan 27 18:51:00 compute-0 sudo[151581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:00 compute-0 python3.9[151583]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539859.4740512-778-237812341518405/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:00 compute-0 sudo[151581]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:01 compute-0 sudo[151733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wruttyswwqwakrxgbeiuxrkgxuvphysp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539861.048905-778-198761820636722/AnsiballZ_stat.py'
Jan 27 18:51:01 compute-0 sudo[151733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:01 compute-0 python3.9[151735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:01 compute-0 sudo[151733]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:01 compute-0 sudo[151856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-garrynqjapjzhcqdcampwtxhlejmlqzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539861.048905-778-198761820636722/AnsiballZ_copy.py'
Jan 27 18:51:01 compute-0 sudo[151856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:02 compute-0 python3.9[151858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539861.048905-778-198761820636722/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:02 compute-0 sudo[151856]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:02 compute-0 sudo[152008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjnfiwnsldjbadpwbdviuivkapmaphya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539862.3620195-778-44464143834729/AnsiballZ_stat.py'
Jan 27 18:51:02 compute-0 sudo[152008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:02 compute-0 python3.9[152010]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:02 compute-0 sudo[152008]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:03 compute-0 sudo[152131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaoxemnbdjnzamiwplwqvsseyqicobrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539862.3620195-778-44464143834729/AnsiballZ_copy.py'
Jan 27 18:51:03 compute-0 sudo[152131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:03 compute-0 python3.9[152133]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539862.3620195-778-44464143834729/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:03 compute-0 sudo[152131]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:04 compute-0 sudo[152283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhhjdohjnbzvgnqrqveiatajfyhfzfro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539863.6094358-778-53329601975626/AnsiballZ_stat.py'
Jan 27 18:51:04 compute-0 sudo[152283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:04 compute-0 python3.9[152285]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:04 compute-0 sudo[152283]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:04 compute-0 sudo[152406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhxitprmgvuksxzysdbohvibpewcxrsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539863.6094358-778-53329601975626/AnsiballZ_copy.py'
Jan 27 18:51:04 compute-0 sudo[152406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:04 compute-0 python3.9[152408]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539863.6094358-778-53329601975626/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:04 compute-0 sudo[152406]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:05 compute-0 sudo[152560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xknhvrgjvryrnkyxvalwfoihlgpfebbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539865.0212438-778-277433603591868/AnsiballZ_stat.py'
Jan 27 18:51:05 compute-0 sudo[152560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:05 compute-0 sshd-session[152409]: Received disconnect from 91.224.92.190 port 54126:11:  [preauth]
Jan 27 18:51:05 compute-0 sshd-session[152409]: Disconnected from authenticating user root 91.224.92.190 port 54126 [preauth]
Jan 27 18:51:05 compute-0 python3.9[152562]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:05 compute-0 sudo[152560]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:05 compute-0 sudo[152683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmdwtzfwrtvpfrdxxkihrykexbjplidy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539865.0212438-778-277433603591868/AnsiballZ_copy.py'
Jan 27 18:51:05 compute-0 sudo[152683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:06 compute-0 python3.9[152685]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539865.0212438-778-277433603591868/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:06 compute-0 sudo[152683]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:06 compute-0 sudo[152835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oigauriagtjwwztcfnrnxjvlruhokozw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539866.2765944-778-98129352800447/AnsiballZ_stat.py'
Jan 27 18:51:06 compute-0 sudo[152835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:06 compute-0 python3.9[152837]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:06 compute-0 sudo[152835]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:07 compute-0 sudo[152958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpefpmxqhfoqrhadmmtzumeisfrkseaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539866.2765944-778-98129352800447/AnsiballZ_copy.py'
Jan 27 18:51:07 compute-0 sudo[152958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:07 compute-0 python3.9[152960]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539866.2765944-778-98129352800447/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:07 compute-0 sudo[152958]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:07 compute-0 sudo[153110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoqqnxoensplozqojkpcimegnjfvlyos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539867.7044358-778-3943756914016/AnsiballZ_stat.py'
Jan 27 18:51:07 compute-0 sudo[153110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:08 compute-0 python3.9[153112]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:08 compute-0 sudo[153110]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:08 compute-0 sudo[153233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejoipbbdubusblawzvuuaycmzqrzghnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539867.7044358-778-3943756914016/AnsiballZ_copy.py'
Jan 27 18:51:08 compute-0 sudo[153233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:08 compute-0 python3.9[153235]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539867.7044358-778-3943756914016/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:08 compute-0 sudo[153233]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:09 compute-0 sudo[153385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikggbwijwglotcgfluvbvowjvgyhtund ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539868.891962-778-141999725014183/AnsiballZ_stat.py'
Jan 27 18:51:09 compute-0 sudo[153385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:09 compute-0 python3.9[153387]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:09 compute-0 sudo[153385]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:09 compute-0 sudo[153508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojndobimkaqyqtuucnfevvpxrqfmiybg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539868.891962-778-141999725014183/AnsiballZ_copy.py'
Jan 27 18:51:09 compute-0 sudo[153508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:09 compute-0 python3.9[153510]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539868.891962-778-141999725014183/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:09 compute-0 sudo[153508]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:10 compute-0 sudo[153660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aklsjqkclskswmimjjmncunpwmkogtri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539870.0582402-778-268547086977648/AnsiballZ_stat.py'
Jan 27 18:51:10 compute-0 sudo[153660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:10 compute-0 python3.9[153662]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:10 compute-0 sudo[153660]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:10 compute-0 sudo[153783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crgvbhpgsiixopqlmymcyxzmxydixnor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539870.0582402-778-268547086977648/AnsiballZ_copy.py'
Jan 27 18:51:10 compute-0 sudo[153783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:11 compute-0 python3.9[153785]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539870.0582402-778-268547086977648/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:11 compute-0 sudo[153783]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:11 compute-0 sudo[153935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlfvkbkciiszkwzdnnzpphteyupboqot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539871.2908964-778-182534722972506/AnsiballZ_stat.py'
Jan 27 18:51:11 compute-0 sudo[153935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:11 compute-0 python3.9[153937]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:11 compute-0 sudo[153935]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:12 compute-0 sudo[154058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usibwtmqphcpxlxdukctoyybqnhgtdyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539871.2908964-778-182534722972506/AnsiballZ_copy.py'
Jan 27 18:51:12 compute-0 sudo[154058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:12 compute-0 python3.9[154060]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539871.2908964-778-182534722972506/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:12 compute-0 sudo[154058]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:13 compute-0 python3.9[154210]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:51:13 compute-0 sudo[154363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drvbgmdpysckzigrlhkssjjfkjvreyri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539873.4300313-984-219626469900416/AnsiballZ_seboolean.py'
Jan 27 18:51:13 compute-0 sudo[154363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:14 compute-0 python3.9[154365]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 27 18:51:15 compute-0 sudo[154363]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:15 compute-0 sudo[154519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spbcoljoqzlgcukasvkzprhvdnhkgjdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539875.5341206-992-246050627237105/AnsiballZ_copy.py'
Jan 27 18:51:15 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 27 18:51:15 compute-0 sudo[154519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:16 compute-0 python3.9[154521]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:16 compute-0 sudo[154519]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:16 compute-0 sudo[154671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaoymlchhhuyvekqizeengkycgmmweag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539876.1786177-992-75579774997635/AnsiballZ_copy.py'
Jan 27 18:51:16 compute-0 sudo[154671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:16 compute-0 python3.9[154673]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:16 compute-0 sudo[154671]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:17 compute-0 sudo[154823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azuqfzjouynwthvpnprqstbttcxshoes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539877.0126193-992-263266371249555/AnsiballZ_copy.py'
Jan 27 18:51:17 compute-0 sudo[154823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:17 compute-0 python3.9[154825]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:17 compute-0 sudo[154823]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:17 compute-0 sudo[154975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejlesreuafxhtagffhxghtuwabgfflwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539877.6781902-992-228677212062503/AnsiballZ_copy.py'
Jan 27 18:51:17 compute-0 sudo[154975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:18 compute-0 python3.9[154977]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:18 compute-0 sudo[154975]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:18 compute-0 sudo[155127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgpmnkgjbrrclpsgftccshqhahlmhlva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539878.2854733-992-44034940293960/AnsiballZ_copy.py'
Jan 27 18:51:18 compute-0 sudo[155127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:18 compute-0 python3.9[155129]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:18 compute-0 sudo[155127]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:19 compute-0 sudo[155279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffpktydxdglhketddidmgziwpvusvkdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539878.9840598-1028-190216404114817/AnsiballZ_copy.py'
Jan 27 18:51:19 compute-0 sudo[155279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:19 compute-0 python3.9[155281]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:19 compute-0 sudo[155279]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:19 compute-0 sudo[155431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmvhfnlswcabhpxmgniwetxgkgitwbft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539879.6480014-1028-264455451999784/AnsiballZ_copy.py'
Jan 27 18:51:19 compute-0 sudo[155431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:20 compute-0 python3.9[155433]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:20 compute-0 sudo[155431]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:51:20.486 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:51:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:51:20.487 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:51:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:51:20.487 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:51:20 compute-0 sudo[155583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgpjoasmyiohqeehxjrkezelzvdllcph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539880.3588295-1028-219300215939617/AnsiballZ_copy.py'
Jan 27 18:51:20 compute-0 sudo[155583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:20 compute-0 python3.9[155585]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:20 compute-0 sudo[155583]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:21 compute-0 sudo[155735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyhvvudyegjndmmdbzedytdizipykcgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539881.125923-1028-232351008988663/AnsiballZ_copy.py'
Jan 27 18:51:21 compute-0 sudo[155735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:21 compute-0 python3.9[155737]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:21 compute-0 sudo[155735]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:22 compute-0 sudo[155887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lssglgwgjmcpvgstuetxxirqthaogzap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539881.7892544-1028-8235994107667/AnsiballZ_copy.py'
Jan 27 18:51:22 compute-0 sudo[155887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:22 compute-0 python3.9[155889]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:22 compute-0 sudo[155887]: pam_unix(sudo:session): session closed for user root
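[Annotation] The six ansible-ansible.legacy.copy invocations above stage the libvirt/QEMU TLS material from /var/lib/openstack/certs/libvirt/default into /etc/pki, with root:root 0644 for the CA bundle and root:qemu 0640 for the server/client cert and key copies. A minimal Python sketch of the same copy-with-ownership step, using only the paths, owners, groups and modes shown in the log (the script itself is illustrative, not part of the playbook, and must run as root):

#!/usr/bin/env python3
# Sketch only: mirrors the remote_src=True copy tasks logged above.
import os
import shutil

CERTS = [
    # (src, dest, owner, group, mode) -- taken from the log entries above
    ("/var/lib/openstack/certs/libvirt/default/ca.crt",  "/etc/pki/CA/cacert.pem",        "root", "root", 0o644),
    ("/var/lib/openstack/certs/libvirt/default/tls.crt", "/etc/pki/qemu/server-cert.pem", "root", "qemu", 0o640),
    ("/var/lib/openstack/certs/libvirt/default/tls.key", "/etc/pki/qemu/server-key.pem",  "root", "qemu", 0o640),
    ("/var/lib/openstack/certs/libvirt/default/tls.crt", "/etc/pki/qemu/client-cert.pem", "root", "qemu", 0o640),
    ("/var/lib/openstack/certs/libvirt/default/tls.key", "/etc/pki/qemu/client-key.pem",  "root", "qemu", 0o640),
    ("/var/lib/openstack/certs/libvirt/default/ca.crt",  "/etc/pki/qemu/ca-cert.pem",     "root", "qemu", 0o640),
]

for src, dest, owner, group, mode in CERTS:
    shutil.copyfile(src, dest)                      # copy file contents on the host
    shutil.chown(dest, user=owner, group=group)     # owner=/group= as logged
    os.chmod(dest, mode)                            # mode= as logged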
Jan 27 18:51:22 compute-0 sudo[156039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inidtvderlxzhsmqztsgoetsetqvqqjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539882.701911-1064-147548487618853/AnsiballZ_systemd.py'
Jan 27 18:51:22 compute-0 sudo[156039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:23 compute-0 python3.9[156041]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:51:23 compute-0 systemd[1]: Reloading.
Jan 27 18:51:23 compute-0 systemd-rc-local-generator[156069]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:51:23 compute-0 systemd-sysv-generator[156072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:51:23 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 27 18:51:23 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 27 18:51:23 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 27 18:51:23 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 27 18:51:23 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 27 18:51:23 compute-0 podman[156078]: 2026-01-27 18:51:23.688819472 +0000 UTC m=+0.085635984 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 18:51:23 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 27 18:51:23 compute-0 sudo[156039]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:24 compute-0 sudo[156252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzawtepjnrpteaktopwffvikxbzycokj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539883.902241-1064-61079191325279/AnsiballZ_systemd.py'
Jan 27 18:51:24 compute-0 sudo[156252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:24 compute-0 python3.9[156254]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:51:24 compute-0 systemd[1]: Reloading.
Jan 27 18:51:24 compute-0 systemd-rc-local-generator[156276]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:51:24 compute-0 systemd-sysv-generator[156281]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:51:24 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 27 18:51:24 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 27 18:51:24 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 27 18:51:24 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 27 18:51:24 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 27 18:51:24 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 27 18:51:24 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 27 18:51:24 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 27 18:51:24 compute-0 sudo[156252]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:25 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 27 18:51:25 compute-0 sudo[156469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxmqvjkvkkfxoxegwvcxwckatrdnkegn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539885.045248-1064-18645215573972/AnsiballZ_systemd.py'
Jan 27 18:51:25 compute-0 sudo[156469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:25 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 27 18:51:25 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 27 18:51:25 compute-0 podman[156472]: 2026-01-27 18:51:25.570556768 +0000 UTC m=+0.131310120 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 18:51:25 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 27 18:51:25 compute-0 python3.9[156471]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:51:25 compute-0 systemd[1]: Reloading.
Jan 27 18:51:25 compute-0 systemd-rc-local-generator[156536]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:51:25 compute-0 systemd-sysv-generator[156540]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:51:25 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 27 18:51:25 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 27 18:51:25 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 27 18:51:25 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 27 18:51:25 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 18:51:26 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 18:51:26 compute-0 sudo[156469]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:26 compute-0 setroubleshoot[156366]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 2e160ce2-64bf-4487-b2cd-52296edb7986
Jan 27 18:51:26 compute-0 setroubleshoot[156366]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 27 18:51:26 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
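[Annotation] The setroubleshoot entry above records an AVC denial for virtlogd (dac_read_search) and prints two suggested remediations. If, and only if, the denial is judged legitimate, the catchall-plugin suggestion can be scripted as below; this is a sketch of the exact commands shown in the log wrapped in subprocess calls (the wrapper and the my-virtlogd module name come from the log's own suggestion, require root, and need the audit and policycoreutils-python-utils packages):

#!/usr/bin/env python3
# Sketch: build and install a local SELinux policy module for the logged
# virtlogd dac_read_search denial, following the catchall plugin suggestion.
import subprocess

# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
raw_avc = subprocess.run(["ausearch", "-c", "virtlogd", "--raw"],
                         check=True, capture_output=True)
subprocess.run(["audit2allow", "-M", "my-virtlogd"],
               input=raw_avc.stdout, check=True)

# semodule -X 300 -i my-virtlogd.pp
subprocess.run(["semodule", "-X", "300", "-i", "my-virtlogd.pp"], check=True)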
Jan 27 18:51:26 compute-0 sudo[156720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unoeflqwnohgxrptsvnyzklqqoouwiwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539886.2041829-1064-99534679954583/AnsiballZ_systemd.py'
Jan 27 18:51:26 compute-0 sudo[156720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:26 compute-0 python3.9[156722]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:51:26 compute-0 systemd[1]: Reloading.
Jan 27 18:51:27 compute-0 systemd-rc-local-generator[156747]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:51:27 compute-0 systemd-sysv-generator[156753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:51:27 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 27 18:51:27 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 27 18:51:27 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 27 18:51:27 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 27 18:51:27 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 27 18:51:27 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 27 18:51:27 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 27 18:51:27 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 27 18:51:27 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 27 18:51:27 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 27 18:51:27 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 27 18:51:27 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 27 18:51:27 compute-0 sudo[156720]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:27 compute-0 sudo[156935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkgvfvopyjttuoouhpjpgpbzgkwaadqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539887.5043156-1064-250019393564688/AnsiballZ_systemd.py'
Jan 27 18:51:27 compute-0 sudo[156935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:28 compute-0 python3.9[156937]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:51:28 compute-0 systemd[1]: Reloading.
Jan 27 18:51:28 compute-0 systemd-rc-local-generator[156966]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:51:28 compute-0 systemd-sysv-generator[156969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:51:28 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 27 18:51:28 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 27 18:51:28 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 27 18:51:28 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 27 18:51:28 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 27 18:51:28 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 27 18:51:28 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 27 18:51:28 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 27 18:51:28 compute-0 sudo[156935]: pam_unix(sudo:session): session closed for user root
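[Annotation] The preceding entries show ansible.builtin.systemd restarting the modular libvirt daemons one by one (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd), each with daemon_reload=True, which is why systemd logs "Reloading." before every socket/daemon start. A minimal Python sketch of that pattern, with the service list taken from the log (in the actual play each restart is its own task; this loop is illustrative and requires root on a systemd host):

#!/usr/bin/env python3
# Sketch: daemon-reload then restart each modular libvirt service,
# mirroring ansible.builtin.systemd with daemon_reload=True, state=restarted.
import subprocess

SERVICES = [
    "virtlogd.service",
    "virtnodedevd.service",
    "virtproxyd.service",
    "virtqemud.service",
    "virtsecretd.service",
]

for unit in SERVICES:
    subprocess.run(["systemctl", "daemon-reload"], check=True)   # daemon_reload=True
    subprocess.run(["systemctl", "restart", unit], check=True)   # state=restarted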
Jan 27 18:51:29 compute-0 sudo[157148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkshkvedfzbhggqcoxjmqjkyteyrkame ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539888.7369719-1101-25791325623531/AnsiballZ_file.py'
Jan 27 18:51:29 compute-0 sudo[157148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:29 compute-0 python3.9[157150]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:29 compute-0 sudo[157148]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:29 compute-0 sudo[157300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lejtcngqmpzvksizstjithuwddonsplf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539889.4110963-1109-82986896102440/AnsiballZ_find.py'
Jan 27 18:51:29 compute-0 sudo[157300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:29 compute-0 python3.9[157302]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 18:51:29 compute-0 sudo[157300]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:30 compute-0 sudo[157452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djlpdplehuqxeiqshfejlptwmhalamhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539890.5348327-1123-274121835957597/AnsiballZ_stat.py'
Jan 27 18:51:30 compute-0 sudo[157452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:31 compute-0 python3.9[157454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:31 compute-0 sudo[157452]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:31 compute-0 sudo[157575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffpxwbtdqyaunpochmlvrranclhgkeem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539890.5348327-1123-274121835957597/AnsiballZ_copy.py'
Jan 27 18:51:31 compute-0 sudo[157575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:31 compute-0 python3.9[157577]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539890.5348327-1123-274121835957597/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:31 compute-0 sudo[157575]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:32 compute-0 sudo[157727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nltbomrzusmepotvggqpdqgjmdsximwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539892.089396-1139-156904663609703/AnsiballZ_file.py'
Jan 27 18:51:32 compute-0 sudo[157727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:32 compute-0 python3.9[157729]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:32 compute-0 sudo[157727]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:33 compute-0 sudo[157879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqogeocecqiuwvqvfzkcabtuuqidckkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539892.8786323-1147-261128103333517/AnsiballZ_stat.py'
Jan 27 18:51:33 compute-0 sudo[157879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:36 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 27 18:51:36 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 27 18:51:45 compute-0 sshd-session[157883]: Invalid user sol from 45.148.10.240 port 59050
Jan 27 18:51:45 compute-0 sshd-session[157883]: Connection closed by invalid user sol 45.148.10.240 port 59050 [preauth]
Jan 27 18:51:46 compute-0 python3.9[157881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:46 compute-0 sudo[157879]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:47 compute-0 sudo[157960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znuqsdwazelnxceevhmeuwwyrygiahmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539892.8786323-1147-261128103333517/AnsiballZ_file.py'
Jan 27 18:51:47 compute-0 sudo[157960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:47 compute-0 python3.9[157962]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:47 compute-0 sudo[157960]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:48 compute-0 sudo[158112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxtcpgjhrecpxfewtmidlmdojwfuwsyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539907.6982796-1159-6820302636230/AnsiballZ_stat.py'
Jan 27 18:51:48 compute-0 sudo[158112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:48 compute-0 python3.9[158114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:48 compute-0 sudo[158112]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:48 compute-0 sudo[158190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntakmswlphobimzicewuukpqyjxayvjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539907.6982796-1159-6820302636230/AnsiballZ_file.py'
Jan 27 18:51:48 compute-0 sudo[158190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:48 compute-0 python3.9[158192]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.mw0c2v42 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:48 compute-0 sudo[158190]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:49 compute-0 sudo[158342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybgwolssoaubofuvdznqllieljjdatbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539909.0124118-1171-108206819460668/AnsiballZ_stat.py'
Jan 27 18:51:49 compute-0 sudo[158342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:50 compute-0 python3.9[158344]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:51:50 compute-0 sudo[158342]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:50 compute-0 sudo[158420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhnlygvjgacfqeqljbktsirptmmbqwga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539909.0124118-1171-108206819460668/AnsiballZ_file.py'
Jan 27 18:51:50 compute-0 sudo[158420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:54 compute-0 podman[158423]: 2026-01-27 18:51:54.301733213 +0000 UTC m=+0.068299713 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 18:51:56 compute-0 podman[158445]: 2026-01-27 18:51:56.35413963 +0000 UTC m=+0.124105233 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 18:51:59 compute-0 python3.9[158422]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:51:59 compute-0 sudo[158420]: pam_unix(sudo:session): session closed for user root
Jan 27 18:51:59 compute-0 sudo[158622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhngxqohcgzdxvknvgrzfjeeefdtdjjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539919.3926663-1184-102244573462491/AnsiballZ_command.py'
Jan 27 18:51:59 compute-0 sudo[158622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:51:59 compute-0 python3.9[158624]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:51:59 compute-0 sudo[158622]: pam_unix(sudo:session): session closed for user root
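[Annotation] The task above captures the current ruleset with `nft -j list ruleset` before the EDPM rules are assembled. A short sketch of how that JSON output can be inspected, assuming the libnftables JSON schema (top-level "nftables" array of "table"/"chain"/"rule" objects); field names here are from that schema, not from the log, and the command needs root:

#!/usr/bin/env python3
# Sketch: list table and chain names from `nft -j list ruleset` JSON output.
import json
import subprocess

out = subprocess.run(["nft", "-j", "list", "ruleset"],
                     check=True, capture_output=True, text=True)
ruleset = json.loads(out.stdout)["nftables"]

for item in ruleset:
    if "table" in item:
        t = item["table"]
        print(f"table {t['family']} {t['name']}")
    elif "chain" in item:
        c = item["chain"]
        print(f"  chain {c['name']} in {c['family']}/{c['table']}")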
Jan 27 18:52:00 compute-0 sudo[158775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvnaeuirvjjcwvzayhyehehhbqcxuimz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769539920.1218736-1192-261627419868643/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 18:52:00 compute-0 sudo[158775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:00 compute-0 python3[158777]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 18:52:00 compute-0 sudo[158775]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:01 compute-0 sudo[158927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoqozagxbovoojkcjsecyzqoyfoprarm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539920.9714372-1200-265995423113503/AnsiballZ_stat.py'
Jan 27 18:52:01 compute-0 sudo[158927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:01 compute-0 python3.9[158929]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:52:01 compute-0 sudo[158927]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:02 compute-0 sudo[159005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmyoatewbzrneehpowevabuynyhrllde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539920.9714372-1200-265995423113503/AnsiballZ_file.py'
Jan 27 18:52:02 compute-0 sudo[159005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:52:20.489 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:52:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:52:20.492 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:52:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:52:20.492 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:52:29 compute-0 podman[159008]: 2026-01-27 18:52:29.96829342 +0000 UTC m=+4.734728997 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 18:52:30 compute-0 podman[159020]: 2026-01-27 18:52:30.013370649 +0000 UTC m=+2.770342833 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 18:52:30 compute-0 python3.9[159007]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:52:30 compute-0 sudo[159005]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:30 compute-0 sudo[159200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twjayddvxslaotnbqalnoqsuawvsyvvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539950.2928128-1212-209854542152238/AnsiballZ_stat.py'
Jan 27 18:52:30 compute-0 sudo[159200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:30 compute-0 python3.9[159202]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:52:30 compute-0 sudo[159200]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:31 compute-0 sudo[159325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quhohriojdeigxeggbvwmppexuqbpkna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539950.2928128-1212-209854542152238/AnsiballZ_copy.py'
Jan 27 18:52:31 compute-0 sudo[159325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:31 compute-0 python3.9[159327]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539950.2928128-1212-209854542152238/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:52:31 compute-0 sudo[159325]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:31 compute-0 sudo[159477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btutshbnyjwjuiqeasihehvkvaplkmoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539951.6754906-1227-206029981880995/AnsiballZ_stat.py'
Jan 27 18:52:31 compute-0 sudo[159477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:32 compute-0 python3.9[159479]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:52:32 compute-0 sudo[159477]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:32 compute-0 sudo[159555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lidcyrejpejwxrfsfiickkgljnnkriss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539951.6754906-1227-206029981880995/AnsiballZ_file.py'
Jan 27 18:52:32 compute-0 sudo[159555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:32 compute-0 python3.9[159557]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:52:32 compute-0 sudo[159555]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:33 compute-0 sudo[159707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etanhmmegrvsvcpxxixxeocuuoeqojbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539952.8293204-1239-248257512702444/AnsiballZ_stat.py'
Jan 27 18:52:33 compute-0 sudo[159707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:33 compute-0 python3.9[159709]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:52:33 compute-0 sudo[159707]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:33 compute-0 sudo[159785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixiqujojymmoddofowgilgvhasyhjwce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539952.8293204-1239-248257512702444/AnsiballZ_file.py'
Jan 27 18:52:33 compute-0 sudo[159785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:33 compute-0 python3.9[159787]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:52:33 compute-0 sudo[159785]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:34 compute-0 sudo[159937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxgtjuykzbhtmodwnwwbwcetwczqpumn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539953.9177306-1251-53146573339930/AnsiballZ_stat.py'
Jan 27 18:52:34 compute-0 sudo[159937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:34 compute-0 python3.9[159939]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:52:34 compute-0 sudo[159937]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:34 compute-0 sudo[160062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kslealgpspjqidtjxtzynkuprrzttdft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539953.9177306-1251-53146573339930/AnsiballZ_copy.py'
Jan 27 18:52:34 compute-0 sudo[160062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:35 compute-0 python3.9[160064]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769539953.9177306-1251-53146573339930/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:52:35 compute-0 sudo[160062]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:35 compute-0 sudo[160214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkrzwnahpqogfjjokpeezfrzhltakkcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539955.1763372-1266-188444782318875/AnsiballZ_file.py'
Jan 27 18:52:35 compute-0 sudo[160214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:35 compute-0 python3.9[160216]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:52:35 compute-0 sudo[160214]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:36 compute-0 sudo[160366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daarxioysqkqvicalchjxdxcjsjvmlzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539955.7949219-1274-224326935148021/AnsiballZ_command.py'
Jan 27 18:52:36 compute-0 sudo[160366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:36 compute-0 python3.9[160368]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:52:36 compute-0 sudo[160366]: pam_unix(sudo:session): session closed for user root
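[Annotation] The command above is a dry-run syntax check: the five EDPM nftables fragments are concatenated in include order and piped to `nft -c -f -`, so nothing is applied yet. A sketch of the same check in Python, with the file list taken from the logged command (requires root and the nft binary):

#!/usr/bin/env python3
# Sketch: concatenate the EDPM nftables files and parse-check them only.
import subprocess

FILES = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]

payload = "".join(open(path).read() for path in FILES)
subprocess.run(["nft", "-c", "-f", "-"], input=payload, text=True, check=True)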
Jan 27 18:52:37 compute-0 sudo[160521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drfuvmzjrxdraqtakayzpobxddfsgtdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539956.549006-1282-116248299810376/AnsiballZ_blockinfile.py'
Jan 27 18:52:37 compute-0 sudo[160521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:37 compute-0 python3.9[160523]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:52:37 compute-0 sudo[160521]: pam_unix(sudo:session): session closed for user root
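[Annotation] The blockinfile task above maintains an "ANSIBLE MANAGED BLOCK" of include lines in /etc/sysconfig/nftables.conf and validates the result with `nft -c -f %s` before writing it. A simplified Python sketch of that behaviour, using the path, markers and include lines from the log; unlike the real module it only appends when no managed block exists yet, and it must run as root:

#!/usr/bin/env python3
# Sketch: append the managed include block if missing, validating first.
import subprocess
import tempfile

CONF = "/etc/sysconfig/nftables.conf"
BLOCK = """# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
"""

with open(CONF) as f:
    current = f.read()

if "# BEGIN ANSIBLE MANAGED BLOCK" not in current:
    candidate = current.rstrip("\n") + "\n" + BLOCK
    # validate=nft -c -f %s : check the candidate before replacing the real file
    with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as tmp:
        tmp.write(candidate)
        tmp_path = tmp.name
    subprocess.run(["nft", "-c", "-f", tmp_path], check=True)
    with open(CONF, "w") as f:
        f.write(candidate)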
Jan 27 18:52:37 compute-0 sudo[160673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sayukjisqqmtyoewaglyhapynohawdrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539957.556085-1291-24595217199897/AnsiballZ_command.py'
Jan 27 18:52:37 compute-0 sudo[160673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:38 compute-0 python3.9[160675]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:52:38 compute-0 sudo[160673]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:38 compute-0 sudo[160826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mocsljlzqjlgmpijbgnmxhswubkrzbdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539958.3013983-1299-38139606725684/AnsiballZ_stat.py'
Jan 27 18:52:38 compute-0 sudo[160826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:38 compute-0 python3.9[160828]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:52:38 compute-0 sudo[160826]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:39 compute-0 sudo[160980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhuntzomdkohnfozuapedgtyymangvwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539959.0952115-1307-53593898179189/AnsiballZ_command.py'
Jan 27 18:52:39 compute-0 sudo[160980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:50 compute-0 python3.9[160982]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:52:50 compute-0 sudo[160980]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:50 compute-0 sudo[161135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-intwjgaqbhazsyzejagtmnczheznfwxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539970.5785227-1315-144879430144980/AnsiballZ_file.py'
Jan 27 18:52:50 compute-0 sudo[161135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:51 compute-0 python3.9[161137]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:52:51 compute-0 sudo[161135]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:51 compute-0 sudo[161287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbpnkcetyypbwhgwoxuntqrvxabycmqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539971.231731-1323-148737381391688/AnsiballZ_stat.py'
Jan 27 18:52:51 compute-0 sudo[161287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:51 compute-0 python3.9[161289]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:52:51 compute-0 sudo[161287]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:52 compute-0 sudo[161410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fajdcehwineybvjbmtankrfrcvnqtvcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539971.231731-1323-148737381391688/AnsiballZ_copy.py'
Jan 27 18:52:52 compute-0 sudo[161410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:52 compute-0 python3.9[161412]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539971.231731-1323-148737381391688/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:52:52 compute-0 sudo[161410]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:52 compute-0 sudo[161562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiylcznouabfaugkerinoeqyqgfocvol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539972.5673554-1338-176574380759177/AnsiballZ_stat.py'
Jan 27 18:52:52 compute-0 sudo[161562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:53 compute-0 python3.9[161564]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:52:53 compute-0 sudo[161562]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:53 compute-0 sudo[161685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saecpojxkjieftbffaqamqvuucfshlxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539972.5673554-1338-176574380759177/AnsiballZ_copy.py'
Jan 27 18:52:53 compute-0 sudo[161685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:53 compute-0 python3.9[161687]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539972.5673554-1338-176574380759177/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:52:53 compute-0 sudo[161685]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:53 compute-0 sudo[161837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgolnthmrthinolkpyznymzjgyptxeeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539973.703202-1353-112558963699585/AnsiballZ_stat.py'
Jan 27 18:52:53 compute-0 sudo[161837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:54 compute-0 python3.9[161839]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:52:54 compute-0 sudo[161837]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:54 compute-0 sudo[161960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjdvjlgjtvfutpypiisrsfqpourkdahd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539973.703202-1353-112558963699585/AnsiballZ_copy.py'
Jan 27 18:52:54 compute-0 sudo[161960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:54 compute-0 python3.9[161962]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769539973.703202-1353-112558963699585/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:52:54 compute-0 sudo[161960]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:55 compute-0 sudo[162112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blmjrmgsdtxdmaekivafuwzliebabhrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539975.0877635-1368-30489823743743/AnsiballZ_systemd.py'
Jan 27 18:52:55 compute-0 sudo[162112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:55 compute-0 python3.9[162114]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:52:55 compute-0 systemd[1]: Reloading.
Jan 27 18:52:55 compute-0 systemd-sysv-generator[162145]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:52:55 compute-0 systemd-rc-local-generator[162142]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:52:55 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 27 18:52:56 compute-0 sudo[162112]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:56 compute-0 sudo[162304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkgmbascpzmlvgootcffsgrbqtyboqko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539976.163311-1376-135373661732484/AnsiballZ_systemd.py'
Jan 27 18:52:56 compute-0 sudo[162304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:52:56 compute-0 python3.9[162306]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 27 18:52:56 compute-0 systemd[1]: Reloading.
Jan 27 18:52:56 compute-0 systemd-rc-local-generator[162334]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:52:56 compute-0 systemd-sysv-generator[162337]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:52:57 compute-0 systemd[1]: Reloading.
Jan 27 18:52:57 compute-0 systemd-sysv-generator[162376]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:52:57 compute-0 systemd-rc-local-generator[162372]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:52:57 compute-0 sudo[162304]: pam_unix(sudo:session): session closed for user root
Jan 27 18:52:57 compute-0 sshd-session[107472]: Connection closed by 192.168.122.31 port 36450
Jan 27 18:52:57 compute-0 sshd-session[107469]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:52:57 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Jan 27 18:52:57 compute-0 systemd[1]: session-22.scope: Consumed 3min 36.750s CPU time.
Jan 27 18:52:57 compute-0 systemd-logind[795]: Session 22 logged out. Waiting for processes to exit.
Jan 27 18:52:57 compute-0 systemd-logind[795]: Removed session 22.
Jan 27 18:53:00 compute-0 podman[162404]: 2026-01-27 18:53:00.288752941 +0000 UTC m=+0.063960224 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 18:53:00 compute-0 podman[162403]: 2026-01-27 18:53:00.31837556 +0000 UTC m=+0.097873558 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 18:53:04 compute-0 sshd-session[162445]: Accepted publickey for zuul from 192.168.122.31 port 57630 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:53:04 compute-0 systemd-logind[795]: New session 23 of user zuul.
Jan 27 18:53:04 compute-0 systemd[1]: Started Session 23 of User zuul.
Jan 27 18:53:04 compute-0 sshd-session[162445]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:53:05 compute-0 python3.9[162598]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:53:06 compute-0 python3.9[162752]: ansible-ansible.builtin.service_facts Invoked
Jan 27 18:53:06 compute-0 network[162769]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 18:53:06 compute-0 network[162770]: 'network-scripts' will be removed from distribution in near future.
Jan 27 18:53:06 compute-0 network[162771]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 18:53:09 compute-0 sudo[163040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtkdfmllvhhdibxzxtdwmaziemmcouwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539989.7147458-42-83123516193959/AnsiballZ_setup.py'
Jan 27 18:53:10 compute-0 sudo[163040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:10 compute-0 python3.9[163042]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 18:53:10 compute-0 sudo[163040]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:11 compute-0 sudo[163124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htdckyvyxpaxhukfjxyjgkgrunbtqvzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539989.7147458-42-83123516193959/AnsiballZ_dnf.py'
Jan 27 18:53:11 compute-0 sudo[163124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:11 compute-0 python3.9[163126]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:53:16 compute-0 sudo[163124]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:17 compute-0 sudo[163277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjrjznhsdsxrtcsfxsanfgbdvnvjmicx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539996.9562776-54-29770739708175/AnsiballZ_stat.py'
Jan 27 18:53:17 compute-0 sudo[163277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:17 compute-0 python3.9[163279]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:53:17 compute-0 sudo[163277]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:18 compute-0 sudo[163429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szdoylwtbqsswmjelpdakiasgoxkhvgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539997.8116117-64-77499879694493/AnsiballZ_command.py'
Jan 27 18:53:18 compute-0 sudo[163429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:18 compute-0 python3.9[163431]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:53:18 compute-0 sudo[163429]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:19 compute-0 sudo[163582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwzuvphinidchkpbsbfkgbrrseyydctd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539998.7461646-74-44655903437893/AnsiballZ_stat.py'
Jan 27 18:53:19 compute-0 sudo[163582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:19 compute-0 python3.9[163584]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:53:19 compute-0 sudo[163582]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:19 compute-0 sudo[163734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lccxaximdpucktztjwtcbocqvcicogeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769539999.399302-82-127939974645696/AnsiballZ_command.py'
Jan 27 18:53:19 compute-0 sudo[163734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:19 compute-0 python3.9[163736]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:53:19 compute-0 sudo[163734]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:20 compute-0 sudo[163887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kibrupgkvzezuxiivgdqyzrsidwhpbco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540000.071627-90-124863192927482/AnsiballZ_stat.py'
Jan 27 18:53:20 compute-0 sudo[163887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:53:20.490 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:53:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:53:20.491 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:53:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:53:20.491 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:53:20 compute-0 python3.9[163889]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:53:20 compute-0 sudo[163887]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:21 compute-0 sudo[164010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsaebzlkyytasvuxdcgsdeksjvjrsaku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540000.071627-90-124863192927482/AnsiballZ_copy.py'
Jan 27 18:53:21 compute-0 sudo[164010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:21 compute-0 python3.9[164012]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540000.071627-90-124863192927482/.source.iscsi _original_basename=.8ckjc_0o follow=False checksum=9dbb734c7087843310ffe03b80b388665476c3f8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:21 compute-0 sudo[164010]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:21 compute-0 sudo[164162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyylvwjinnpwnhmxksxwsaklpjvwtrxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540001.4662397-105-203118652155291/AnsiballZ_file.py'
Jan 27 18:53:21 compute-0 sudo[164162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:22 compute-0 python3.9[164164]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:22 compute-0 sudo[164162]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:22 compute-0 sudo[164314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbiowhfrebqnspkpptalbzcqajugrwrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540002.3069727-113-3638946902208/AnsiballZ_lineinfile.py'
Jan 27 18:53:22 compute-0 sudo[164314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:22 compute-0 python3.9[164316]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:22 compute-0 sudo[164314]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:23 compute-0 sudo[164466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iihknkmrzpkxvnyvvpermwmogxmnwpja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540003.2044084-122-59369627946311/AnsiballZ_systemd_service.py'
Jan 27 18:53:23 compute-0 sudo[164466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:24 compute-0 python3.9[164468]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:53:24 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 27 18:53:24 compute-0 sudo[164466]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:24 compute-0 sudo[164622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obtibtikmemumkheazkzynupaukzkxqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540004.4208703-130-16391498243727/AnsiballZ_systemd_service.py'
Jan 27 18:53:24 compute-0 sudo[164622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:24 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 27 18:53:25 compute-0 python3.9[164624]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:53:25 compute-0 systemd[1]: Reloading.
Jan 27 18:53:25 compute-0 systemd-rc-local-generator[164655]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:53:25 compute-0 systemd-sysv-generator[164658]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:53:25 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 27 18:53:25 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 27 18:53:25 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 27 18:53:25 compute-0 systemd[1]: Started Open-iSCSI.
Jan 27 18:53:25 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 27 18:53:25 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 27 18:53:25 compute-0 sudo[164622]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:26 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 18:53:26 compute-0 python3.9[164827]: ansible-ansible.builtin.service_facts Invoked
Jan 27 18:53:26 compute-0 network[164844]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 18:53:26 compute-0 network[164845]: 'network-scripts' will be removed from distribution in near future.
Jan 27 18:53:26 compute-0 network[164846]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 18:53:27 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 27 18:53:28 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 27 18:53:30 compute-0 sudo[165117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrzzukwoidryvjuwrirkbspfskycgxdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540009.9768453-153-82721673613760/AnsiballZ_dnf.py'
Jan 27 18:53:30 compute-0 sudo[165117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:30 compute-0 python3.9[165119]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:53:31 compute-0 podman[165122]: 2026-01-27 18:53:31.285479464 +0000 UTC m=+0.058741846 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 18:53:31 compute-0 podman[165121]: 2026-01-27 18:53:31.322483995 +0000 UTC m=+0.096028955 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 18:53:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 18:53:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 18:53:34 compute-0 systemd[1]: Reloading.
Jan 27 18:53:34 compute-0 systemd-sysv-generator[165213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:53:34 compute-0 systemd-rc-local-generator[165209]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:53:34 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 18:53:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 18:53:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 18:53:35 compute-0 systemd[1]: run-r4ce48bc13bd941dfb98b83eda7c25d9d.service: Deactivated successfully.
Jan 27 18:53:35 compute-0 sudo[165117]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:35 compute-0 sudo[165478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uugfiecgclqbmfuypcrxohkhwknaydub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540015.5774162-162-278132011864404/AnsiballZ_file.py'
Jan 27 18:53:35 compute-0 sudo[165478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:36 compute-0 python3.9[165480]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 27 18:53:36 compute-0 sudo[165478]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:36 compute-0 sudo[165630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsdenyqswiykjeobyqcpcohygfvpkyvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540016.2491329-170-265333557631308/AnsiballZ_modprobe.py'
Jan 27 18:53:36 compute-0 sudo[165630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:36 compute-0 python3.9[165632]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 27 18:53:36 compute-0 sudo[165630]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:37 compute-0 sudo[165786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exzprkkvaeldobedukwqyvtyekjwhqzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540017.0660295-178-64264543649230/AnsiballZ_stat.py'
Jan 27 18:53:37 compute-0 sudo[165786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:37 compute-0 python3.9[165788]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:53:37 compute-0 sudo[165786]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:37 compute-0 sudo[165909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzjkuxnpukllwxcrfnxcwkjdqboymsyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540017.0660295-178-64264543649230/AnsiballZ_copy.py'
Jan 27 18:53:37 compute-0 sudo[165909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:38 compute-0 python3.9[165911]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540017.0660295-178-64264543649230/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:38 compute-0 sudo[165909]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:38 compute-0 sudo[166061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwjnpnxfxnyhplophlnarhbummkgzumu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540018.3210998-194-13614367603147/AnsiballZ_lineinfile.py'
Jan 27 18:53:38 compute-0 sudo[166061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:38 compute-0 python3.9[166063]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:38 compute-0 sudo[166061]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:39 compute-0 sudo[166213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kexeikiilogaqgqhpxajmzncwfbenvmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540018.958755-202-211270123353148/AnsiballZ_systemd.py'
Jan 27 18:53:39 compute-0 sudo[166213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:39 compute-0 python3.9[166215]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:53:39 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 27 18:53:39 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 27 18:53:39 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 27 18:53:39 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 27 18:53:40 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 27 18:53:40 compute-0 sudo[166213]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:40 compute-0 sudo[166369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwibryujwkeaiwpvxgjhewrrpqsaumbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540020.2295022-210-226050087558339/AnsiballZ_command.py'
Jan 27 18:53:40 compute-0 sudo[166369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:40 compute-0 python3.9[166371]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:53:40 compute-0 sudo[166369]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:41 compute-0 sudo[166522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqdfyhufylxlwkkuafcqkzvpdvqrevps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540020.9860413-220-222867404138183/AnsiballZ_stat.py'
Jan 27 18:53:41 compute-0 sudo[166522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:41 compute-0 python3.9[166524]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:53:41 compute-0 sudo[166522]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:42 compute-0 sudo[166674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wncysestpysbcelzehqsadxjkxjxpmes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540021.9043922-229-273333056610002/AnsiballZ_stat.py'
Jan 27 18:53:42 compute-0 sudo[166674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:42 compute-0 python3.9[166676]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:53:42 compute-0 sudo[166674]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:42 compute-0 sudo[166797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkxwgnpevoexregcwqtprdgsngbclkdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540021.9043922-229-273333056610002/AnsiballZ_copy.py'
Jan 27 18:53:42 compute-0 sudo[166797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:42 compute-0 python3.9[166799]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540021.9043922-229-273333056610002/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:42 compute-0 sudo[166797]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:43 compute-0 sudo[166949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pziivgmrzcyxxghnpdxphkfaicgqblzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540023.1862345-244-183016619248893/AnsiballZ_command.py'
Jan 27 18:53:43 compute-0 sudo[166949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:43 compute-0 python3.9[166951]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:53:43 compute-0 sudo[166949]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:44 compute-0 sudo[167102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meuwtznhtuwzwbbjufgeouheiqjkqnpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540023.8893936-252-194389418280087/AnsiballZ_lineinfile.py'
Jan 27 18:53:44 compute-0 sudo[167102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:44 compute-0 python3.9[167104]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:44 compute-0 sudo[167102]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:45 compute-0 sudo[167254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snbmieecvgzlylpmpbkatslehicxzlkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540024.553172-260-273628174237210/AnsiballZ_replace.py'
Jan 27 18:53:45 compute-0 sudo[167254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:45 compute-0 python3.9[167256]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:45 compute-0 sudo[167254]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:45 compute-0 sudo[167406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eovkfqfnupfmjofadhxjkpdjyropjtkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540025.3708274-268-167398453041573/AnsiballZ_replace.py'
Jan 27 18:53:45 compute-0 sudo[167406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:45 compute-0 python3.9[167408]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:45 compute-0 sudo[167406]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:46 compute-0 sudo[167558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bflixbcnrykcomofjrrumxvwiefegpmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540026.0729058-277-206512962649525/AnsiballZ_lineinfile.py'
Jan 27 18:53:46 compute-0 sudo[167558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:46 compute-0 python3.9[167560]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:46 compute-0 sudo[167558]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:46 compute-0 sudo[167710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeiajknjvikkwkwmruhhiexgkmqdjshr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540026.6765049-277-71477143999870/AnsiballZ_lineinfile.py'
Jan 27 18:53:46 compute-0 sudo[167710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:47 compute-0 python3.9[167712]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:47 compute-0 sudo[167710]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:47 compute-0 sudo[167862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzipvqfnwzqonyoxatlxjhirvynldgml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540027.3299665-277-149549131364043/AnsiballZ_lineinfile.py'
Jan 27 18:53:47 compute-0 sudo[167862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:47 compute-0 python3.9[167864]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:47 compute-0 sudo[167862]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:48 compute-0 sudo[168014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bejrlscitezbodakxtybnphcwmovezcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540028.014034-277-112120209910099/AnsiballZ_lineinfile.py'
Jan 27 18:53:48 compute-0 sudo[168014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:48 compute-0 python3.9[168016]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:48 compute-0 sudo[168014]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:48 compute-0 sudo[168166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spwmvsgecekzenemwtgqnvspoeicnrfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540028.6540859-306-145178577326184/AnsiballZ_stat.py'
Jan 27 18:53:48 compute-0 sudo[168166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:49 compute-0 python3.9[168168]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:53:49 compute-0 sudo[168166]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:49 compute-0 sudo[168320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlicfytfyytalzfqxcdyeqqysucavkth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540029.3652332-314-7551199886512/AnsiballZ_command.py'
Jan 27 18:53:49 compute-0 sudo[168320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:49 compute-0 python3.9[168322]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:53:49 compute-0 sudo[168320]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:50 compute-0 sudo[168473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyneiclksslroagrleutaqavsctshwwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540030.0418358-323-87896416070471/AnsiballZ_systemd_service.py'
Jan 27 18:53:50 compute-0 sudo[168473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:50 compute-0 python3.9[168475]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:53:50 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 27 18:53:50 compute-0 sudo[168473]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:51 compute-0 sudo[168629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmfgmsojibonatxnzjtahtgqpjgrobzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540030.90184-331-144209817015891/AnsiballZ_systemd_service.py'
Jan 27 18:53:51 compute-0 sudo[168629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:51 compute-0 python3.9[168631]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:53:51 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 27 18:53:51 compute-0 udevadm[168636]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 27 18:53:51 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 27 18:53:51 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 27 18:53:51 compute-0 multipathd[168640]: --------start up--------
Jan 27 18:53:51 compute-0 multipathd[168640]: read /etc/multipath.conf
Jan 27 18:53:51 compute-0 multipathd[168640]: path checkers start up
Jan 27 18:53:51 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 27 18:53:51 compute-0 sudo[168629]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:52 compute-0 sudo[168797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyxpmofoxmwggdwzormbydaszannpaxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540032.0550058-343-247734589497872/AnsiballZ_file.py'
Jan 27 18:53:52 compute-0 sudo[168797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:52 compute-0 python3.9[168799]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 27 18:53:52 compute-0 sudo[168797]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:53 compute-0 sudo[168949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdymdnscqfetznqptisvqiwwhrjmkgoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540032.758088-351-221118162054088/AnsiballZ_modprobe.py'
Jan 27 18:53:53 compute-0 sudo[168949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:53 compute-0 python3.9[168951]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 27 18:53:53 compute-0 kernel: Key type psk registered
Jan 27 18:53:53 compute-0 sudo[168949]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:53 compute-0 sudo[169113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsjxlthcvjrbdtiuibrsbwrldwbuwqms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540033.4595969-359-63071530286497/AnsiballZ_stat.py'
Jan 27 18:53:53 compute-0 sudo[169113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:54 compute-0 python3.9[169115]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:53:54 compute-0 sudo[169113]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:54 compute-0 sudo[169236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmvdhwkxjkzsbkglozvqduacfzjxjgjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540033.4595969-359-63071530286497/AnsiballZ_copy.py'
Jan 27 18:53:54 compute-0 sudo[169236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:54 compute-0 python3.9[169238]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540033.4595969-359-63071530286497/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:54 compute-0 sudo[169236]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:55 compute-0 sudo[169388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogyxsjvddzqycxzhtguovfhxbbqdgsgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540034.9447193-375-38780791759982/AnsiballZ_lineinfile.py'
Jan 27 18:53:55 compute-0 sudo[169388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:55 compute-0 python3.9[169390]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:53:55 compute-0 sudo[169388]: pam_unix(sudo:session): session closed for user root
Jan 27 18:53:56 compute-0 sudo[169540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xepwjzbbjppjgjcxxjhotrxcxlmskrlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540035.9464085-383-275725405451219/AnsiballZ_systemd.py'
Jan 27 18:53:56 compute-0 sudo[169540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:56 compute-0 python3.9[169542]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:53:56 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 27 18:53:56 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 27 18:53:56 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 27 18:53:56 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 27 18:53:56 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 27 18:53:56 compute-0 sudo[169540]: pam_unix(sudo:session): session closed for user root
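The modprobe, file/copy, lineinfile and systemd tasks between 18:53:52 and 18:53:56 load nvme-fabrics immediately, persist it for future boots, and restart systemd-modules-load so the new drop-in is honoured right away. An approximate hand-run equivalent, with the paths taken from the log:

    # Load nvme-fabrics now and make it persistent across reboots
    modprobe nvme-fabrics
    install -d -m 0755 /etc/modules-load.d
    printf 'nvme-fabrics\n' > /etc/modules-load.d/nvme-fabrics.conf
    grep -qx 'nvme-fabrics' /etc/modules 2>/dev/null || echo 'nvme-fabrics' >> /etc/modules
    # Re-run the module loader so the persisted list takes effect immediately
    systemctl restart systemd-modules-load.service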
Jan 27 18:53:57 compute-0 sudo[169696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfjtbvxjfjhsreifoyosrvqvjvamvywg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540036.8904452-391-135420200509462/AnsiballZ_dnf.py'
Jan 27 18:53:57 compute-0 sudo[169696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:53:57 compute-0 python3.9[169698]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 18:53:59 compute-0 sshd-session[169702]: Invalid user sol from 45.148.10.240 port 37174
Jan 27 18:53:59 compute-0 sshd-session[169702]: Connection closed by invalid user sol 45.148.10.240 port 37174 [preauth]
Jan 27 18:54:02 compute-0 podman[169706]: 2026-01-27 18:54:02.29527991 +0000 UTC m=+0.069016163 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 18:54:02 compute-0 podman[169705]: 2026-01-27 18:54:02.328324325 +0000 UTC m=+0.102112479 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 18:54:02 compute-0 systemd[1]: Reloading.
Jan 27 18:54:02 compute-0 systemd-rc-local-generator[169777]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:54:02 compute-0 systemd-sysv-generator[169782]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:54:02 compute-0 systemd[1]: Reloading.
Jan 27 18:54:02 compute-0 systemd-rc-local-generator[169813]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:54:02 compute-0 systemd-sysv-generator[169817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:54:03 compute-0 systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 27 18:54:03 compute-0 systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 27 18:54:03 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 18:54:03 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 18:54:03 compute-0 systemd[1]: Reloading.
Jan 27 18:54:03 compute-0 systemd-rc-local-generator[169908]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:54:03 compute-0 systemd-sysv-generator[169912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:54:03 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 18:54:04 compute-0 sudo[169696]: pam_unix(sudo:session): session closed for user root
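The ansible.legacy.dnf task above installs nvme-cli; the systemd reloads and the man-db-cache-update unit that follow are side effects of the package transaction rather than separate playbook steps. The manual equivalent is simply:

    # Install the NVMe management CLI
    dnf install -y nvme-cli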
Jan 27 18:54:04 compute-0 sudo[171143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnfywolprpikeaxfmnjtnqnrckexxbnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540044.4740777-399-252412139999166/AnsiballZ_systemd_service.py'
Jan 27 18:54:04 compute-0 sudo[171143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:04 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 18:54:04 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 18:54:04 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.716s CPU time.
Jan 27 18:54:04 compute-0 systemd[1]: run-r38e5f1c3796445d19dd551254cca034f.service: Deactivated successfully.
Jan 27 18:54:05 compute-0 python3.9[171165]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:54:05 compute-0 iscsid[164665]: iscsid shutting down.
Jan 27 18:54:05 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 27 18:54:05 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 27 18:54:05 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 27 18:54:05 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 27 18:54:05 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 27 18:54:05 compute-0 systemd[1]: Started Open-iSCSI.
Jan 27 18:54:05 compute-0 sudo[171143]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:05 compute-0 sudo[171364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utglqydmzcoknkghgiihjaoiqvyyesji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540045.3348794-407-263360951745888/AnsiballZ_systemd_service.py'
Jan 27 18:54:05 compute-0 sudo[171364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:05 compute-0 python3.9[171366]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:54:05 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 27 18:54:05 compute-0 multipathd[168640]: exit (signal)
Jan 27 18:54:05 compute-0 multipathd[168640]: --------shut down-------
Jan 27 18:54:05 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 27 18:54:06 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 27 18:54:06 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 27 18:54:06 compute-0 multipathd[171372]: --------start up--------
Jan 27 18:54:06 compute-0 multipathd[171372]: read /etc/multipath.conf
Jan 27 18:54:06 compute-0 multipathd[171372]: path checkers start up
Jan 27 18:54:06 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 27 18:54:06 compute-0 sudo[171364]: pam_unix(sudo:session): session closed for user root
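The two state=restarted tasks bounce iscsid and multipathd so they pick up the configuration written earlier; the one-time iscsi.service setup is skipped because /etc/iscsi/initiatorname.iscsi already exists. Roughly:

    # Restart the iSCSI and multipath daemons to pick up the new configuration
    systemctl restart iscsid.service
    systemctl restart multipathd.service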
Jan 27 18:54:06 compute-0 python3.9[171529]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:54:07 compute-0 sudo[171683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxxkecnvvyjihhlptzbwplxtenwrlyfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540047.3306413-425-78223243021375/AnsiballZ_file.py'
Jan 27 18:54:07 compute-0 sudo[171683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:07 compute-0 python3.9[171685]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:07 compute-0 sudo[171683]: pam_unix(sudo:session): session closed for user root
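The file task with state=touch only makes sure a system-wide SSH known-hosts file exists with the expected mode before host keys are distributed; by hand this would be something like:

    # Ensure the global known_hosts file exists with mode 0644
    touch /etc/ssh/ssh_known_hosts
    chmod 0644 /etc/ssh/ssh_known_hosts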
Jan 27 18:54:08 compute-0 sudo[171835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdgzhjbovhbpqnyzkicmxkqafrygtsoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540048.163473-436-262026375732204/AnsiballZ_systemd_service.py'
Jan 27 18:54:08 compute-0 sudo[171835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:08 compute-0 python3.9[171837]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:54:08 compute-0 systemd[1]: Reloading.
Jan 27 18:54:08 compute-0 systemd-rc-local-generator[171863]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:54:08 compute-0 systemd-sysv-generator[171866]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:54:09 compute-0 sudo[171835]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:09 compute-0 python3.9[172022]: ansible-ansible.builtin.service_facts Invoked
Jan 27 18:54:09 compute-0 network[172039]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 18:54:09 compute-0 network[172040]: 'network-scripts' will be removed from distribution in near future.
Jan 27 18:54:09 compute-0 network[172041]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 18:54:14 compute-0 sudo[172311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixloyuzjlnzsbepgoujqhfmacoryqatr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540054.252767-455-247704809042195/AnsiballZ_systemd_service.py'
Jan 27 18:54:14 compute-0 sudo[172311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:14 compute-0 python3.9[172313]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:54:14 compute-0 sudo[172311]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:15 compute-0 sudo[172464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqjfqcifslcmidcehvcwlpuucbfkzajd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540055.0760086-455-192134355906755/AnsiballZ_systemd_service.py'
Jan 27 18:54:15 compute-0 sudo[172464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:15 compute-0 python3.9[172466]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:54:15 compute-0 sudo[172464]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:16 compute-0 sudo[172617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kftwvexlsdzmbztgrkiqtolvrmkhancz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540055.8372707-455-129190666933835/AnsiballZ_systemd_service.py'
Jan 27 18:54:16 compute-0 sudo[172617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:16 compute-0 python3.9[172619]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:54:16 compute-0 sudo[172617]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:16 compute-0 sudo[172770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqjqtjgljvmarsrvyehqkbhrmdmapaoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540056.5992591-455-231937768711715/AnsiballZ_systemd_service.py'
Jan 27 18:54:16 compute-0 sudo[172770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:17 compute-0 python3.9[172772]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:54:17 compute-0 sudo[172770]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:17 compute-0 sudo[172923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npnikqxunupelxdhdafwgnienhzzyuvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540057.3837707-455-112184589947389/AnsiballZ_systemd_service.py'
Jan 27 18:54:17 compute-0 sudo[172923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:18 compute-0 python3.9[172925]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:54:18 compute-0 sudo[172923]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:18 compute-0 sudo[173076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvmegtvyaetrpxowxwgoippqzsuhisbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540058.2973578-455-119033782978962/AnsiballZ_systemd_service.py'
Jan 27 18:54:18 compute-0 sudo[173076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:18 compute-0 python3.9[173078]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:54:18 compute-0 sudo[173076]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:19 compute-0 sudo[173229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvviejnfzolxgegkdjpylahzihifqzkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540059.1253808-455-5229315187938/AnsiballZ_systemd_service.py'
Jan 27 18:54:19 compute-0 sudo[173229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:19 compute-0 python3.9[173231]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:54:19 compute-0 sudo[173229]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:20 compute-0 sudo[173382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pteaqlgeptudwimwkmntehwfgprkqxlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540059.9801521-455-157268693231429/AnsiballZ_systemd_service.py'
Jan 27 18:54:20 compute-0 sudo[173382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:54:20.491 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:54:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:54:20.493 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:54:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:54:20.493 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:54:20 compute-0 python3.9[173384]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:54:20 compute-0 sudo[173382]: pam_unix(sudo:session): session closed for user root
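The eight systemd_service tasks between 18:54:14 and 18:54:20 stop and disable the legacy tripleo_nova_* units ahead of removing their unit files. Condensed into a shell loop over the same unit names:

    # Stop and disable the legacy TripleO nova services
    for unit in tripleo_nova_compute tripleo_nova_migration_target \
                tripleo_nova_api_cron tripleo_nova_api tripleo_nova_conductor \
                tripleo_nova_metadata tripleo_nova_scheduler tripleo_nova_vnc_proxy; do
        systemctl disable --now "${unit}.service"
    done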
Jan 27 18:54:21 compute-0 sudo[173535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bztydvfjmxejrodlusvyrfkpfioalsre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540061.5218399-514-250433020365509/AnsiballZ_file.py'
Jan 27 18:54:21 compute-0 sudo[173535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:22 compute-0 python3.9[173537]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:22 compute-0 sudo[173535]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:22 compute-0 sudo[173687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qptmfytzyluantkqhbrzeqlzoacktmdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540062.1956127-514-173896800931759/AnsiballZ_file.py'
Jan 27 18:54:22 compute-0 sudo[173687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:22 compute-0 python3.9[173689]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:22 compute-0 sudo[173687]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:23 compute-0 sudo[173839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayczlazwrcdpeyykpblpmifykdrzqmlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540062.796351-514-235323446988892/AnsiballZ_file.py'
Jan 27 18:54:23 compute-0 sudo[173839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:23 compute-0 python3.9[173841]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:23 compute-0 sudo[173839]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:23 compute-0 sudo[173991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmtvbtzpfasagtwflpudcrhadizpbxku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540063.4280515-514-210889017523165/AnsiballZ_file.py'
Jan 27 18:54:23 compute-0 sudo[173991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:23 compute-0 python3.9[173993]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:23 compute-0 sudo[173991]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:24 compute-0 sudo[174143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhhnaoqcqmbnjuauxnfuimnisffkvmav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540064.0439496-514-223300360242009/AnsiballZ_file.py'
Jan 27 18:54:24 compute-0 sudo[174143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:24 compute-0 python3.9[174145]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:24 compute-0 sudo[174143]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:24 compute-0 sudo[174295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybmcrqxjwmumixxpipkvjwzznwfuumkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540064.6919706-514-239510467531751/AnsiballZ_file.py'
Jan 27 18:54:24 compute-0 sudo[174295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:25 compute-0 python3.9[174297]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:25 compute-0 sudo[174295]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:25 compute-0 sudo[174447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcfitljmwjvqedyvtoqyjrlrcuquaknk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540065.349959-514-246529978868508/AnsiballZ_file.py'
Jan 27 18:54:25 compute-0 sudo[174447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:25 compute-0 python3.9[174449]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:25 compute-0 sudo[174447]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:26 compute-0 sudo[174599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfrdinheszevqierdzsttsmthoguuawl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540065.9823265-514-252342728087168/AnsiballZ_file.py'
Jan 27 18:54:26 compute-0 sudo[174599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:26 compute-0 python3.9[174601]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:26 compute-0 sudo[174599]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:26 compute-0 sudo[174751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pohhqndlepdudctmnoobrfzrotetgmlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540066.680963-571-29771175385797/AnsiballZ_file.py'
Jan 27 18:54:26 compute-0 sudo[174751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:27 compute-0 python3.9[174753]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:27 compute-0 sudo[174751]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:27 compute-0 sudo[174903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppwgxehmdpaaslpiqsrbmlxuusfubcqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540067.3129816-571-1418365851153/AnsiballZ_file.py'
Jan 27 18:54:27 compute-0 sudo[174903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:27 compute-0 python3.9[174905]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:27 compute-0 sudo[174903]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:28 compute-0 sudo[175055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdqyqwdgofmedgyarpxmajjsldbihhgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540067.945058-571-260938731601134/AnsiballZ_file.py'
Jan 27 18:54:28 compute-0 sudo[175055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:28 compute-0 python3.9[175057]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:28 compute-0 sudo[175055]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:28 compute-0 sudo[175207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxsndsvireryiafyqclahlcosfqjsnuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540068.5926447-571-61894088457265/AnsiballZ_file.py'
Jan 27 18:54:28 compute-0 sudo[175207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:29 compute-0 python3.9[175209]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:29 compute-0 sudo[175207]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:29 compute-0 sudo[175359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwqynogbcssnulctlcmlxlcknjctkocl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540069.2433183-571-135090485029531/AnsiballZ_file.py'
Jan 27 18:54:29 compute-0 sudo[175359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:29 compute-0 python3.9[175361]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:29 compute-0 sudo[175359]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:30 compute-0 sudo[175511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcjqasugyskmlvtzmypxjobxqdzmzvvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540070.0560362-571-208640721072413/AnsiballZ_file.py'
Jan 27 18:54:30 compute-0 sudo[175511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:30 compute-0 python3.9[175513]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:30 compute-0 sudo[175511]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:30 compute-0 sudo[175663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkdfankbvvslqnwnbgduaocztftfqhrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540070.7065303-571-210455392054275/AnsiballZ_file.py'
Jan 27 18:54:30 compute-0 sudo[175663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:31 compute-0 python3.9[175665]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:31 compute-0 sudo[175663]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:31 compute-0 sudo[175815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isldfhyblrvhsdwxobjcereeixkbyznx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540071.3849034-571-78761402392076/AnsiballZ_file.py'
Jan 27 18:54:31 compute-0 sudo[175815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:31 compute-0 python3.9[175817]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:54:31 compute-0 sudo[175815]: pam_unix(sudo:session): session closed for user root
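The file tasks with state=absent then delete the unit files for the same services from both unit search paths (/usr/lib/systemd/system and /etc/systemd/system). A compact sketch of the same clean-up:

    # Remove the tripleo_nova_* unit files from both unit directories
    for unit in tripleo_nova_compute tripleo_nova_migration_target \
                tripleo_nova_api_cron tripleo_nova_api tripleo_nova_conductor \
                tripleo_nova_metadata tripleo_nova_scheduler tripleo_nova_vnc_proxy; do
        rm -f "/usr/lib/systemd/system/${unit}.service" \
              "/etc/systemd/system/${unit}.service"
    done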
Jan 27 18:54:32 compute-0 sudo[175994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzbgvndgtevqtrjoibvjoihndpqpzayv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540072.141806-629-122442309652523/AnsiballZ_command.py'
Jan 27 18:54:32 compute-0 sudo[175994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:32 compute-0 podman[175942]: 2026-01-27 18:54:32.506788198 +0000 UTC m=+0.069255589 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 18:54:32 compute-0 podman[175941]: 2026-01-27 18:54:32.551476924 +0000 UTC m=+0.113912345 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 18:54:32 compute-0 python3.9[176007]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:54:32 compute-0 sudo[175994]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:33 compute-0 python3.9[176166]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
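The ansible.builtin.find task above only enumerates leftover certmonger tracking requests (recurse=False, hidden=True); nothing is deleted at this point. It is roughly equivalent to:

    # List any remaining certmonger tracking requests, including hidden entries
    find /var/lib/certmonger/requests -mindepth 1 -maxdepth 1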
Jan 27 18:54:34 compute-0 sudo[176316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdyrsfqdbrkbsguoinlqwainmddycoxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540073.832945-647-104469643179576/AnsiballZ_systemd_service.py'
Jan 27 18:54:34 compute-0 sudo[176316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:34 compute-0 python3.9[176318]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:54:34 compute-0 systemd[1]: Reloading.
Jan 27 18:54:34 compute-0 systemd-sysv-generator[176350]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:54:34 compute-0 systemd-rc-local-generator[176346]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:54:34 compute-0 sudo[176316]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:35 compute-0 sudo[176504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beslwrrxqhkieugnpdjavbvdllsfzhgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540074.8999875-655-122156073013135/AnsiballZ_command.py'
Jan 27 18:54:35 compute-0 sudo[176504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:35 compute-0 python3.9[176506]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:54:35 compute-0 sudo[176504]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:35 compute-0 sudo[176657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvhkyhnvngwkoodgxidelnqhngycpsjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540075.5764117-655-116005169263529/AnsiballZ_command.py'
Jan 27 18:54:35 compute-0 sudo[176657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:36 compute-0 python3.9[176659]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:54:36 compute-0 sudo[176657]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:36 compute-0 sudo[176810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwttrzpvcdezdheaejwbkfhumktgglim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540076.219382-655-167478132775443/AnsiballZ_command.py'
Jan 27 18:54:36 compute-0 sudo[176810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:36 compute-0 python3.9[176812]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:54:36 compute-0 sudo[176810]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:37 compute-0 sudo[176963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pprxyaiashukibeosxiduucwjuugjazk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540076.880997-655-262989386655714/AnsiballZ_command.py'
Jan 27 18:54:37 compute-0 sudo[176963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:37 compute-0 python3.9[176965]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:54:37 compute-0 sudo[176963]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:37 compute-0 sudo[177116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddviutnzjedflwhdmnwqedswfbwvbono ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540077.5128644-655-13311083223552/AnsiballZ_command.py'
Jan 27 18:54:37 compute-0 sudo[177116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:37 compute-0 python3.9[177118]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:54:37 compute-0 sudo[177116]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:38 compute-0 sudo[177269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhijqmvbhtloydrtlnhngxlscbxlniol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540078.108675-655-234294043437673/AnsiballZ_command.py'
Jan 27 18:54:38 compute-0 sudo[177269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:38 compute-0 python3.9[177271]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:54:38 compute-0 sudo[177269]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:39 compute-0 sudo[177422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqodqknkbiblzxyzdkjtanccquvyahki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540078.7387252-655-200928855643236/AnsiballZ_command.py'
Jan 27 18:54:39 compute-0 sudo[177422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:39 compute-0 python3.9[177424]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:54:39 compute-0 sudo[177422]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:39 compute-0 sudo[177575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbxgeuguvlguzwswtvckapubrzmuqzbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540079.4058223-655-254647523021161/AnsiballZ_command.py'
Jan 27 18:54:39 compute-0 sudo[177575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:39 compute-0 python3.9[177577]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:54:39 compute-0 sudo[177575]: pam_unix(sudo:session): session closed for user root
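After the daemon-reload at 18:54:34, each removed unit gets a reset-failed so systemd forgets any failed state it may still hold for them. The eight commands logged above amount to:

    # Clear remembered failed state for the removed units
    systemctl daemon-reload
    systemctl reset-failed tripleo_nova_compute.service tripleo_nova_migration_target.service \
        tripleo_nova_api_cron.service tripleo_nova_api.service \
        tripleo_nova_conductor.service tripleo_nova_metadata.service \
        tripleo_nova_scheduler.service tripleo_nova_vnc_proxy.service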
Jan 27 18:54:41 compute-0 sudo[177728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udwonvvvvxawltdicmchrdfnjsvaahgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540080.8700693-734-7587757606138/AnsiballZ_file.py'
Jan 27 18:54:41 compute-0 sudo[177728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:41 compute-0 python3.9[177730]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:54:41 compute-0 sudo[177728]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:41 compute-0 sudo[177880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtjxnqzcvrzijcnggvydzlxcavxtjsuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540081.507819-734-150328297383094/AnsiballZ_file.py'
Jan 27 18:54:41 compute-0 sudo[177880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:42 compute-0 python3.9[177882]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:54:42 compute-0 sudo[177880]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:42 compute-0 sudo[178032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkaqhrrdzxhkvriaxydrmxidijognpsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540082.240259-734-203219140740961/AnsiballZ_file.py'
Jan 27 18:54:42 compute-0 sudo[178032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:42 compute-0 python3.9[178034]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:54:42 compute-0 sudo[178032]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:43 compute-0 sudo[178184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkdxmcvanzseywdescwnuahcvszsfwcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540082.9269264-756-1836807385176/AnsiballZ_file.py'
Jan 27 18:54:43 compute-0 sudo[178184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:43 compute-0 python3.9[178186]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:54:43 compute-0 sudo[178184]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:43 compute-0 sudo[178336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysngkjftjoeemhdqcxzatpyovrnqesbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540083.5501087-756-159542523537532/AnsiballZ_file.py'
Jan 27 18:54:43 compute-0 sudo[178336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:44 compute-0 python3.9[178338]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:54:44 compute-0 sudo[178336]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:44 compute-0 sudo[178488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trsvmbozrsiveirfwirgepninzqpjgkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540084.2245991-756-53625016000159/AnsiballZ_file.py'
Jan 27 18:54:44 compute-0 sudo[178488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:44 compute-0 python3.9[178490]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:54:44 compute-0 sudo[178488]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:45 compute-0 sudo[178640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybvygfragguezivzhmkgmfkxvqptxltv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540084.8673942-756-249429893397687/AnsiballZ_file.py'
Jan 27 18:54:45 compute-0 sudo[178640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:45 compute-0 python3.9[178642]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:54:45 compute-0 sudo[178640]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:45 compute-0 sudo[178792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfaaalxgfzxvonnurmamysctdsgomvub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540085.4996808-756-145566079745183/AnsiballZ_file.py'
Jan 27 18:54:45 compute-0 sudo[178792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:45 compute-0 python3.9[178794]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:54:45 compute-0 sudo[178792]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:46 compute-0 sudo[178944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krxccjkfaoayeycfyvijauyxnpngbxbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540086.1357572-756-86435318007197/AnsiballZ_file.py'
Jan 27 18:54:46 compute-0 sudo[178944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:46 compute-0 python3.9[178946]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:54:46 compute-0 sudo[178944]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:47 compute-0 sudo[179096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdjicpmgyudelsfqcaxsttxnzhfxbmlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540086.8540373-756-151889843943175/AnsiballZ_file.py'
Jan 27 18:54:47 compute-0 sudo[179096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:47 compute-0 python3.9[179098]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:54:47 compute-0 sudo[179096]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:52 compute-0 sudo[179248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgvkzgxxnnuhriswgftoaejbzxazoamv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540091.5389783-925-9755738967258/AnsiballZ_getent.py'
Jan 27 18:54:52 compute-0 sudo[179248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:52 compute-0 python3.9[179250]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 27 18:54:52 compute-0 sudo[179248]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:52 compute-0 sudo[179401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnmgbvzcwdtvmwkssmvewqowljyqcpdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540092.4462156-933-176698624789977/AnsiballZ_group.py'
Jan 27 18:54:52 compute-0 sudo[179401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:53 compute-0 python3.9[179403]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 18:54:53 compute-0 groupadd[179404]: group added to /etc/group: name=nova, GID=42436
Jan 27 18:54:53 compute-0 groupadd[179404]: group added to /etc/gshadow: name=nova
Jan 27 18:54:53 compute-0 groupadd[179404]: new group: name=nova, GID=42436
Jan 27 18:54:53 compute-0 sudo[179401]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:53 compute-0 sudo[179559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huamdopwrdzfylkvtcktmwwlshydvbrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540093.3108072-941-154655416928648/AnsiballZ_user.py'
Jan 27 18:54:53 compute-0 sudo[179559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:54:54 compute-0 python3.9[179561]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 18:54:54 compute-0 useradd[179563]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 27 18:54:54 compute-0 useradd[179563]: add 'nova' to group 'libvirt'
Jan 27 18:54:54 compute-0 useradd[179563]: add 'nova' to shadow group 'libvirt'
Jan 27 18:54:54 compute-0 sudo[179559]: pam_unix(sudo:session): session closed for user root
Jan 27 18:54:55 compute-0 sshd-session[179594]: Accepted publickey for zuul from 192.168.122.31 port 40020 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:54:55 compute-0 systemd-logind[795]: New session 24 of user zuul.
Jan 27 18:54:55 compute-0 systemd[1]: Started Session 24 of User zuul.
Jan 27 18:54:55 compute-0 sshd-session[179594]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:54:55 compute-0 sshd-session[179597]: Received disconnect from 192.168.122.31 port 40020:11: disconnected by user
Jan 27 18:54:55 compute-0 sshd-session[179597]: Disconnected from user zuul 192.168.122.31 port 40020
Jan 27 18:54:55 compute-0 sshd-session[179594]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:54:55 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Jan 27 18:54:55 compute-0 systemd-logind[795]: Session 24 logged out. Waiting for processes to exit.
Jan 27 18:54:55 compute-0 systemd-logind[795]: Removed session 24.
Jan 27 18:54:55 compute-0 python3.9[179747]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:54:56 compute-0 python3.9[179868]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540095.4460242-966-205778830242601/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:54:57 compute-0 python3.9[180018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:54:57 compute-0 python3.9[180094]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:54:58 compute-0 python3.9[180244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:54:59 compute-0 python3.9[180365]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540097.902909-966-176502658078407/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:55:00 compute-0 python3.9[180515]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:55:00 compute-0 python3.9[180636]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540099.4434838-966-255732119075185/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:55:01 compute-0 python3.9[180786]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:55:01 compute-0 python3.9[180907]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540100.8005464-966-260805983210376/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:55:02 compute-0 python3.9[181057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:55:02 compute-0 podman[181153]: 2026-01-27 18:55:02.976792709 +0000 UTC m=+0.059565523 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 18:55:03 compute-0 podman[181152]: 2026-01-27 18:55:03.012082327 +0000 UTC m=+0.098634884 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 27 18:55:03 compute-0 python3.9[181201]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540102.0279863-966-208965411227371/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:55:03 compute-0 sudo[181371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyjfmpuqwieejotbpndunfsshjiqofjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540103.4597173-1049-135907078653026/AnsiballZ_file.py'
Jan 27 18:55:03 compute-0 sudo[181371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:03 compute-0 python3.9[181373]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:55:03 compute-0 sudo[181371]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:04 compute-0 sudo[181523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrsllhbkmdgsykviximltzlmwqvtrtvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540104.1088526-1057-91586727862843/AnsiballZ_copy.py'
Jan 27 18:55:04 compute-0 sudo[181523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:04 compute-0 python3.9[181525]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:55:04 compute-0 sudo[181523]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:05 compute-0 sudo[181675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfefxjzmeehloqyzzoiclrtnbqgcqtvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540104.7458987-1065-199734920534731/AnsiballZ_stat.py'
Jan 27 18:55:05 compute-0 sudo[181675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:05 compute-0 python3.9[181677]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:55:05 compute-0 sudo[181675]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:05 compute-0 sudo[181827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgjmkivoqsqcrxaasoosyblpeegstmaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540105.4421468-1073-177609229936161/AnsiballZ_stat.py'
Jan 27 18:55:05 compute-0 sudo[181827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:06 compute-0 python3.9[181829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:55:06 compute-0 sudo[181827]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:06 compute-0 sudo[181950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzwelqqmodmyslvyosdbskqxlmdrwkid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540105.4421468-1073-177609229936161/AnsiballZ_copy.py'
Jan 27 18:55:06 compute-0 sudo[181950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:06 compute-0 python3.9[181952]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769540105.4421468-1073-177609229936161/.source _original_basename=.2plljs_s follow=False checksum=0a78dc841ef555729cec9997b9eb0b967a809d0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 27 18:55:06 compute-0 sudo[181950]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:07 compute-0 python3.9[182104]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:55:08 compute-0 python3.9[182256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:55:08 compute-0 python3.9[182377]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540107.7222533-1099-130081997423568/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:55:09 compute-0 python3.9[182527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:55:10 compute-0 python3.9[182648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540109.068854-1114-256129657611867/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:55:10 compute-0 sudo[182798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roopeuktvnuzcyszcxkundpwanxuncby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540110.4439662-1131-256821475914291/AnsiballZ_container_config_data.py'
Jan 27 18:55:10 compute-0 sudo[182798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:11 compute-0 python3.9[182800]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 27 18:55:11 compute-0 sudo[182798]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:11 compute-0 sudo[182950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvzoekoukopsnxxmmcjrfjacfhbjncoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540111.4679587-1142-251037854538063/AnsiballZ_container_config_hash.py'
Jan 27 18:55:11 compute-0 sudo[182950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:12 compute-0 python3.9[182952]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 18:55:12 compute-0 sudo[182950]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:13 compute-0 sudo[183102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkrhaefxpsbuspxbzgstcenyfvnxjanv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540112.686934-1152-35287840124171/AnsiballZ_edpm_container_manage.py'
Jan 27 18:55:13 compute-0 sudo[183102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:13 compute-0 python3[183104]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 18:55:13 compute-0 podman[183140]: 2026-01-27 18:55:13.81555003 +0000 UTC m=+0.114380906 container create 5200f212accb2a34eae6559f15acfcbba510057f85566bea26289d96d98be3ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, org.label-schema.license=GPLv2, config_id=edpm)
Jan 27 18:55:13 compute-0 podman[183140]: 2026-01-27 18:55:13.72789333 +0000 UTC m=+0.026724226 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 18:55:13 compute-0 python3[183104]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 27 18:55:13 compute-0 sudo[183102]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:14 compute-0 sudo[183327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmymvbhlyipdfrbtmsyshjvogjcvfxcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540114.2133348-1160-96470605808074/AnsiballZ_stat.py'
Jan 27 18:55:14 compute-0 sudo[183327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:14 compute-0 python3.9[183329]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:55:14 compute-0 sudo[183327]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:15 compute-0 sudo[183481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgqwrjbqrhcsqdozfluioivhqbiwkmrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540115.2683427-1172-40564028174181/AnsiballZ_container_config_data.py'
Jan 27 18:55:15 compute-0 sudo[183481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:15 compute-0 python3.9[183483]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 27 18:55:15 compute-0 sudo[183481]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:16 compute-0 sudo[183633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozneofenugwsgmpvkaettnxqhidkvwme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540116.063632-1183-274037185999213/AnsiballZ_container_config_hash.py'
Jan 27 18:55:16 compute-0 sudo[183633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:16 compute-0 python3.9[183635]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 18:55:16 compute-0 sudo[183633]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:17 compute-0 sudo[183785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvnlquyvoemwlfksjjvoxyydccnsecbw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540116.9342208-1193-177514101722598/AnsiballZ_edpm_container_manage.py'
Jan 27 18:55:17 compute-0 sudo[183785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:17 compute-0 python3[183787]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 18:55:17 compute-0 podman[183823]: 2026-01-27 18:55:17.829399119 +0000 UTC m=+0.072032323 container create 89a631991284c1dc40279d108e706116b6210a3385815ea91a1fcdf5db089950 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, config_id=edpm)
Jan 27 18:55:17 compute-0 podman[183823]: 2026-01-27 18:55:17.78843884 +0000 UTC m=+0.031072144 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 18:55:17 compute-0 python3[183787]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 27 18:55:18 compute-0 sudo[183785]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:18 compute-0 sudo[184011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqvorvsbsgxjsfccgiactczdeuemqnct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540118.1906216-1201-182880141408133/AnsiballZ_stat.py'
Jan 27 18:55:18 compute-0 sudo[184011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:18 compute-0 python3.9[184013]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:55:18 compute-0 sudo[184011]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:19 compute-0 sudo[184165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftawqiqiemtkdpkrxflkkkkrbbalhurm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540118.9481952-1210-281219337779631/AnsiballZ_file.py'
Jan 27 18:55:19 compute-0 sudo[184165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:19 compute-0 python3.9[184167]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:55:19 compute-0 sudo[184165]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:20 compute-0 sudo[184316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezbvabbepffqaahwxlfqjojqovzdsdyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540119.5848703-1210-26573252065105/AnsiballZ_copy.py'
Jan 27 18:55:20 compute-0 sudo[184316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:20 compute-0 python3.9[184318]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769540119.5848703-1210-26573252065105/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:55:20 compute-0 sudo[184316]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:20 compute-0 sudo[184392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ondjvhhhtcdxqnzwhncuarqntplkjmwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540119.5848703-1210-26573252065105/AnsiballZ_systemd.py'
Jan 27 18:55:20 compute-0 sudo[184392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:55:20.493 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:55:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:55:20.494 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:55:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:55:20.495 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:55:20 compute-0 python3.9[184394]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:55:20 compute-0 systemd[1]: Reloading.
Jan 27 18:55:20 compute-0 systemd-sysv-generator[184426]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:55:20 compute-0 systemd-rc-local-generator[184423]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:55:21 compute-0 sudo[184392]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:21 compute-0 sudo[184504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqwrwwraverwyegwsxtzqtqpacmrxazf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540119.5848703-1210-26573252065105/AnsiballZ_systemd.py'
Jan 27 18:55:21 compute-0 sudo[184504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:21 compute-0 python3.9[184506]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:55:21 compute-0 systemd[1]: Reloading.
Jan 27 18:55:22 compute-0 systemd-sysv-generator[184538]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:55:22 compute-0 systemd-rc-local-generator[184534]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:55:22 compute-0 systemd[1]: Starting nova_compute container...
Jan 27 18:55:22 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3ff0d6842a9c71bf3709bea9a02f4ba9f5d2018f441fb57f2f1011147adc5d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3ff0d6842a9c71bf3709bea9a02f4ba9f5d2018f441fb57f2f1011147adc5d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3ff0d6842a9c71bf3709bea9a02f4ba9f5d2018f441fb57f2f1011147adc5d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3ff0d6842a9c71bf3709bea9a02f4ba9f5d2018f441fb57f2f1011147adc5d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3ff0d6842a9c71bf3709bea9a02f4ba9f5d2018f441fb57f2f1011147adc5d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:22 compute-0 podman[184546]: 2026-01-27 18:55:22.553464985 +0000 UTC m=+0.139539962 container init 89a631991284c1dc40279d108e706116b6210a3385815ea91a1fcdf5db089950 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Jan 27 18:55:22 compute-0 podman[184546]: 2026-01-27 18:55:22.565777491 +0000 UTC m=+0.151852378 container start 89a631991284c1dc40279d108e706116b6210a3385815ea91a1fcdf5db089950 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 18:55:22 compute-0 nova_compute[184561]: + sudo -E kolla_set_configs
Jan 27 18:55:22 compute-0 podman[184546]: nova_compute
Jan 27 18:55:22 compute-0 systemd[1]: Started nova_compute container.
Jan 27 18:55:22 compute-0 sudo[184504]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Validating config file
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Copying service configuration files
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Deleting /etc/ceph
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Creating directory /etc/ceph
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Setting permission for /etc/ceph
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Writing out command to execute
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 18:55:22 compute-0 nova_compute[184561]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 18:55:22 compute-0 nova_compute[184561]: ++ cat /run_command
Jan 27 18:55:22 compute-0 nova_compute[184561]: + CMD=nova-compute
Jan 27 18:55:22 compute-0 nova_compute[184561]: + ARGS=
Jan 27 18:55:22 compute-0 nova_compute[184561]: + sudo kolla_copy_cacerts
Jan 27 18:55:22 compute-0 nova_compute[184561]: + [[ ! -n '' ]]
Jan 27 18:55:22 compute-0 nova_compute[184561]: + . kolla_extend_start
Jan 27 18:55:22 compute-0 nova_compute[184561]: Running command: 'nova-compute'
Jan 27 18:55:22 compute-0 nova_compute[184561]: + echo 'Running command: '\''nova-compute'\'''
Jan 27 18:55:22 compute-0 nova_compute[184561]: + umask 0022
Jan 27 18:55:22 compute-0 nova_compute[184561]: + exec nova-compute
Jan 27 18:55:23 compute-0 python3.9[184722]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:55:24 compute-0 python3.9[184873]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:55:24 compute-0 nova_compute[184561]: 2026-01-27 18:55:24.774 184565 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 18:55:24 compute-0 nova_compute[184561]: 2026-01-27 18:55:24.775 184565 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 18:55:24 compute-0 nova_compute[184561]: 2026-01-27 18:55:24.775 184565 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 18:55:24 compute-0 nova_compute[184561]: 2026-01-27 18:55:24.775 184565 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 27 18:55:24 compute-0 python3.9[185025]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:55:24 compute-0 nova_compute[184561]: 2026-01-27 18:55:24.909 184565 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 18:55:24 compute-0 nova_compute[184561]: 2026-01-27 18:55:24.937 184565 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 18:55:24 compute-0 nova_compute[184561]: 2026-01-27 18:55:24.937 184565 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.558 184565 INFO nova.virt.driver [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.677 184565 INFO nova.compute.provider_config [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.691 184565 DEBUG oslo_concurrency.lockutils [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.692 184565 DEBUG oslo_concurrency.lockutils [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.692 184565 DEBUG oslo_concurrency.lockutils [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.692 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.693 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.693 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.693 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.693 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.693 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.694 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.694 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.694 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.694 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.695 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.695 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.695 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.695 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.695 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.696 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.696 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.696 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.696 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.696 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.697 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.697 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.697 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.697 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.698 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.698 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.698 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.698 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.698 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.699 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.699 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.699 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.699 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.699 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.700 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.700 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.700 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.700 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.700 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.701 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.701 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.701 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.701 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.701 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.702 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.702 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.702 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.702 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.702 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.703 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.703 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.703 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.703 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.703 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.704 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.704 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.704 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.704 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.704 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.705 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.705 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.705 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.705 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.705 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.706 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.706 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.706 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.706 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.706 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.706 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.707 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.707 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.707 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.707 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.707 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.708 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.708 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.708 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.708 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.708 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.709 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.709 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.709 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.709 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.709 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.710 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.710 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.710 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.710 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.710 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.711 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.711 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.711 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.711 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.711 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.712 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.712 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.712 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.712 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.712 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.712 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.713 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.713 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.713 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.713 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.713 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.714 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.714 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.714 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.714 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.714 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.715 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.715 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.715 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.715 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.715 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.716 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.716 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.716 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.716 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.716 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.716 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.717 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.717 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.717 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.717 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.717 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.718 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.718 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.718 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.718 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.718 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.719 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.719 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.719 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.719 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.719 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.719 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.720 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.720 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.720 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.720 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.720 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.721 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.721 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.721 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.721 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.721 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.722 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.722 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.722 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.722 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.722 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.723 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.723 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.723 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.723 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.723 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.724 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.724 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.724 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.724 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.724 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.725 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.725 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.725 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.725 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.725 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.726 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.726 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.726 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.726 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.726 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.726 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.727 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.727 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.727 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.727 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.727 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.728 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.728 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.728 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.728 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.728 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.729 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.730 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.730 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.730 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.730 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.730 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.731 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.731 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.731 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.731 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.731 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.732 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.732 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.732 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.732 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.732 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.733 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.733 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.733 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 sudo[185177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmopiydqhpqcuexnrfbphyafczogrrlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540125.2004914-1270-49379310468705/AnsiballZ_podman_container.py'
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.733 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.733 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.734 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.734 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.734 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.734 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.734 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.735 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.735 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.735 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.735 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.735 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.736 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.736 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.736 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.737 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.737 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.737 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.737 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.737 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.737 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.738 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.738 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 sudo[185177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.738 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.738 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.738 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.739 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.739 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.739 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.739 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.739 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.740 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.740 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.740 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.740 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.740 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.741 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.741 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.741 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.741 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.741 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.742 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.742 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.742 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.742 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.742 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.743 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.743 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.743 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.743 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.743 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.744 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.744 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.744 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.744 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.744 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.745 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.745 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.745 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.745 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.745 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.746 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.746 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.746 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.746 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.746 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.747 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.747 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.747 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.747 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.747 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.747 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.748 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.748 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.748 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.748 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.748 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.749 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.749 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.749 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.749 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.749 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.750 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.750 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.750 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.750 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.750 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.751 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.751 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.751 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.751 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.751 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.752 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.752 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.752 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.752 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.752 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.753 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.753 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.753 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.753 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.753 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.753 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.754 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.754 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.754 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.754 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.754 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.755 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.755 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.755 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.755 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.755 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.756 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.756 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.756 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.756 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.756 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.757 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.757 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.757 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.757 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.757 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.758 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.758 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.758 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.758 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.758 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.758 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.759 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.759 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.759 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.759 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.759 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.760 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.760 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.760 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.760 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.760 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.761 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.761 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.761 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.761 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.761 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.762 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.762 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.762 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.762 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.762 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.763 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.763 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.763 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.763 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.764 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.764 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.764 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.764 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.764 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.765 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.765 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.765 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.765 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.765 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.765 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.766 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.766 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.766 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.766 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.766 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.767 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.767 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.767 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.767 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.767 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.768 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.768 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.768 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.768 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.768 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.769 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.769 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.769 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.769 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.769 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.769 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.770 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.770 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.770 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.770 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.770 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.771 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.771 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.771 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.771 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.771 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.772 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.772 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.772 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.772 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.772 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.773 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.773 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.773 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.773 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.773 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.774 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.774 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.774 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.774 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.774 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.775 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.775 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.775 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.775 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.775 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.775 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.776 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.776 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.776 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.776 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.776 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.777 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.777 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.777 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.777 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.777 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.778 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.778 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.778 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.778 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.778 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.779 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.779 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.779 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.779 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.779 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.779 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.780 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.780 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.780 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.780 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.780 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.781 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.781 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.781 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.781 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.782 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.782 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.782 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.782 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.782 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.782 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.783 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.783 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.783 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.783 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.783 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.784 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.784 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.784 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.784 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.784 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.785 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.785 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.785 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.785 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.785 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.786 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.786 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.786 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.786 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.786 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.787 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.787 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.787 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.787 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.787 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.787 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.788 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.788 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.788 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.788 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.788 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.789 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.789 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.789 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.789 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.789 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.790 184565 WARNING oslo_config.cfg [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 27 18:55:25 compute-0 nova_compute[184561]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 27 18:55:25 compute-0 nova_compute[184561]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 27 18:55:25 compute-0 nova_compute[184561]: and ``live_migration_inbound_addr`` respectively.
Jan 27 18:55:25 compute-0 nova_compute[184561]: ).  Its value may be silently ignored in the future.
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.790 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
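The deprecation warning above indicates that the configured live_migration_uri (qemu+tls://%s/system) should eventually be expressed through the two replacement options. A minimal nova.conf sketch of the equivalent settings, assuming TLS-secured migration and using a placeholder for the per-host target address (the address value is illustrative, not taken from this log):

    [libvirt]
    live_migration_scheme = tls
    live_migration_inbound_addr = <compute host migration address>

With live_migration_with_native_tls = True already set (see the following line), this keeps migration traffic on the qemu+tls transport without relying on the deprecated URI template.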
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.790 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.790 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.791 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.791 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.791 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.791 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.791 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.792 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.792 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.792 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.792 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.793 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.793 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.794 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.794 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.794 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.795 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.795 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.795 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.796 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.796 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.796 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.797 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.797 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.797 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.797 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.798 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.798 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.798 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.798 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.798 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.798 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.799 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.799 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.799 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.799 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.799 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.799 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.799 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.800 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.800 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.800 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.800 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.800 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.800 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.800 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.801 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.801 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.801 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.801 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.801 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.801 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.801 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.802 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.802 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.802 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.802 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.802 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.802 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.803 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.803 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.803 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.803 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.803 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.803 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.803 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.804 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.804 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.804 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.804 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.804 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.804 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.804 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.805 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.805 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.805 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.805 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.805 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.805 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.805 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.806 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.806 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.806 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.806 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.806 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.806 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.807 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.807 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.807 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.807 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.807 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.807 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.807 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.807 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.808 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.808 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.808 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.808 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.808 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.808 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.809 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.809 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.809 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.809 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.809 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.809 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.809 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.809 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.810 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.810 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.810 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.810 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.810 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.811 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.811 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.811 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.811 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.811 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.811 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.811 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.811 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.812 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.812 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.812 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.812 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.812 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.812 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.812 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.813 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.813 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.813 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.813 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.813 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.813 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.813 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.814 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.814 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.814 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.814 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.814 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.814 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.815 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.815 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.815 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.815 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.815 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.815 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.815 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.815 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.816 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.816 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.816 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.816 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.816 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.816 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.816 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.817 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.817 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.817 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.817 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.818 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.818 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.818 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.818 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.818 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.819 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.819 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.819 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.819 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.819 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.819 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.820 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.820 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.820 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.820 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.820 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.820 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.821 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.821 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.821 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.821 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.821 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.821 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.822 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.822 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.822 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.822 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.822 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.822 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.822 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.822 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.823 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.823 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.823 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.823 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.823 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.824 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.824 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.824 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.824 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.824 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.825 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.825 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.825 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.825 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.825 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.825 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.826 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.826 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.826 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.826 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.826 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.826 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.827 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.827 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.827 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.827 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.827 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.828 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.828 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.828 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.828 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.828 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.828 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.829 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.829 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.829 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.829 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.829 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.829 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.829 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.830 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.830 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.830 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.830 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.830 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.830 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.830 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.831 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.831 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.831 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.831 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.831 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.832 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.832 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.832 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.832 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.832 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.833 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.833 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.833 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.833 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.833 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.834 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.834 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.834 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.834 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.834 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.835 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.835 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.835 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.835 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.835 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.835 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.835 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.836 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.836 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.836 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.836 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.836 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.837 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.837 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.837 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.837 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.837 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.837 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.838 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.838 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.838 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.838 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.838 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.838 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.838 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.839 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.839 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.839 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.839 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.839 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.839 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.839 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.840 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.840 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.840 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.840 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.840 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.840 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.840 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.841 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.841 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.841 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.841 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.841 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.841 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.841 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.842 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.842 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.842 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.842 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.842 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.842 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.842 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.843 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.843 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.843 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.843 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.843 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.843 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.843 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.844 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.844 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.844 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.844 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.844 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.844 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.844 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.844 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.845 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.845 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.845 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.845 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.845 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.845 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.846 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.846 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.846 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.846 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.846 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.846 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.847 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.847 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.847 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.847 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.847 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.847 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.847 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.848 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.848 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.848 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.848 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.848 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.848 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.848 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.848 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.849 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.849 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.849 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.849 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.849 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.849 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.849 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.850 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.850 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.850 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.850 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.850 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.850 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.850 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.850 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.851 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.851 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.851 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.851 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.851 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.851 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.851 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.852 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.852 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.852 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.852 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.852 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.852 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.852 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.853 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.853 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.853 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.853 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.853 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.853 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.854 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.854 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.854 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.854 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.854 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.854 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.854 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.855 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.855 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.855 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.855 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.855 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.855 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.855 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.856 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.856 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.856 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.856 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.856 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.856 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.856 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.857 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.857 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.857 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.857 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.857 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.857 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.857 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.858 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.858 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.858 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.858 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.858 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.858 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.858 184565 DEBUG oslo_service.service [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.859 184565 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.874 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.875 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.875 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.875 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 27 18:55:25 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 27 18:55:25 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.944 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f382417dac0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.947 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f382417dac0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.947 184565 INFO nova.virt.libvirt.driver [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Connection event '1' reason 'None'
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.973 184565 WARNING nova.virt.libvirt.driver [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 27 18:55:25 compute-0 nova_compute[184561]: 2026-01-27 18:55:25.973 184565 DEBUG nova.virt.libvirt.volume.mount [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 27 18:55:26 compute-0 python3.9[185179]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 27 18:55:26 compute-0 sudo[185177]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:26 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 18:55:26 compute-0 sudo[185411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giawneztekisstdnyktahvfdhtywwjsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540126.3149931-1278-275481629064846/AnsiballZ_systemd.py'
Jan 27 18:55:26 compute-0 sudo[185411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:26 compute-0 nova_compute[184561]: 2026-01-27 18:55:26.791 184565 INFO nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Libvirt host capabilities <capabilities>
Jan 27 18:55:26 compute-0 nova_compute[184561]: 
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <host>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <uuid>8c3eef75-502c-49c8-ac4b-40d0b3f964e2</uuid>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <cpu>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <arch>x86_64</arch>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model>EPYC-Rome-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <vendor>AMD</vendor>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <microcode version='16777317'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <signature family='23' model='49' stepping='0'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='x2apic'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='tsc-deadline'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='osxsave'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='hypervisor'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='tsc_adjust'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='spec-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='stibp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='arch-capabilities'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='ssbd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='cmp_legacy'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='topoext'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='virt-ssbd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='lbrv'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='tsc-scale'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='vmcb-clean'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='pause-filter'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='pfthreshold'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='svme-addr-chk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='rdctl-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='skip-l1dfl-vmentry'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='mds-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature name='pschange-mc-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <pages unit='KiB' size='4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <pages unit='KiB' size='2048'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <pages unit='KiB' size='1048576'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </cpu>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <power_management>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <suspend_mem/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <suspend_disk/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <suspend_hybrid/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </power_management>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <iommu support='no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <migration_features>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <live/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <uri_transports>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <uri_transport>tcp</uri_transport>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <uri_transport>rdma</uri_transport>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </uri_transports>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </migration_features>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <topology>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <cells num='1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <cell id='0'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:           <memory unit='KiB'>7864304</memory>
Jan 27 18:55:26 compute-0 nova_compute[184561]:           <pages unit='KiB' size='4'>1966076</pages>
Jan 27 18:55:26 compute-0 nova_compute[184561]:           <pages unit='KiB' size='2048'>0</pages>
Jan 27 18:55:26 compute-0 nova_compute[184561]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 27 18:55:26 compute-0 nova_compute[184561]:           <distances>
Jan 27 18:55:26 compute-0 nova_compute[184561]:             <sibling id='0' value='10'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:           </distances>
Jan 27 18:55:26 compute-0 nova_compute[184561]:           <cpus num='8'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:           </cpus>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         </cell>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </cells>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </topology>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <cache>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </cache>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <secmodel>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model>selinux</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <doi>0</doi>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </secmodel>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <secmodel>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model>dac</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <doi>0</doi>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </secmodel>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   </host>
Jan 27 18:55:26 compute-0 nova_compute[184561]: 
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <guest>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <os_type>hvm</os_type>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <arch name='i686'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <wordsize>32</wordsize>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <domain type='qemu'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <domain type='kvm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </arch>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <features>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <pae/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <nonpae/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <acpi default='on' toggle='yes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <apic default='on' toggle='no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <cpuselection/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <deviceboot/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <disksnapshot default='on' toggle='no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <externalSnapshot/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </features>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   </guest>
Jan 27 18:55:26 compute-0 nova_compute[184561]: 
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <guest>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <os_type>hvm</os_type>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <arch name='x86_64'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <wordsize>64</wordsize>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <domain type='qemu'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <domain type='kvm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </arch>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <features>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <acpi default='on' toggle='yes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <apic default='on' toggle='no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <cpuselection/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <deviceboot/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <disksnapshot default='on' toggle='no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <externalSnapshot/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </features>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   </guest>
Jan 27 18:55:26 compute-0 nova_compute[184561]: 
Jan 27 18:55:26 compute-0 nova_compute[184561]: </capabilities>
Jan 27 18:55:26 compute-0 nova_compute[184561]: 
Jan 27 18:55:26 compute-0 nova_compute[184561]: 2026-01-27 18:55:26.804 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 18:55:26 compute-0 nova_compute[184561]: 2026-01-27 18:55:26.830 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 27 18:55:26 compute-0 nova_compute[184561]: <domainCapabilities>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <domain>kvm</domain>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <arch>i686</arch>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <vcpu max='240'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <iothreads supported='yes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <os supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <enum name='firmware'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <loader supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>rom</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>pflash</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='readonly'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>yes</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>no</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='secure'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>no</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </loader>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   </os>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <cpu>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <mode name='host-passthrough' supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='hostPassthroughMigratable'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>on</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>off</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <mode name='maximum' supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='maximumMigratable'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>on</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>off</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <mode name='host-model' supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <vendor>AMD</vendor>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='x2apic'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='hypervisor'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='stibp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='ssbd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='overflow-recov'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='succor'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='ibrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='lbrv'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='tsc-scale'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='flushbyasid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='pause-filter'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='pfthreshold'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='disable' name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <mode name='custom' supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-noTSX'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='ClearwaterForest'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bhi-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ddpd-u'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sha512'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sm3'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sm4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='ClearwaterForest-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bhi-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ddpd-u'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sha512'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sm3'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sm4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cooperlake'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cooperlake-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cooperlake-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Denverton'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Denverton-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Denverton-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Denverton-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Dhyana-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Genoa'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='perfmon-v2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Turin'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='perfmon-v2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbpb'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Turin-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='perfmon-v2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbpb'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-v5'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10-128'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10-256'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10-512'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10-128'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10-256'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10-512'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-noTSX'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v5'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v6'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v7'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='IvyBridge'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='IvyBridge-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='IvyBridge-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='IvyBridge-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='KnightsMill'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512er'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512pf'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='KnightsMill-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512er'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512pf'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Opteron_G4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Opteron_G4-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Opteron_G5'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tbm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Opteron_G5-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tbm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SierraForest'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SierraForest-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SierraForest-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SierraForest-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v5'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Snowridge'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='athlon'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='athlon-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='core2duo'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='core2duo-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='coreduo'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='coreduo-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='n270'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='n270-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='phenom'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='phenom-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   </cpu>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <memoryBacking supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <enum name='sourceType'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <value>file</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <value>anonymous</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <value>memfd</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   </memoryBacking>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <devices>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <disk supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='diskDevice'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>disk</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>cdrom</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>floppy</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>lun</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='bus'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>ide</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>fdc</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>scsi</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>usb</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>sata</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>virtio-transitional</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>virtio-non-transitional</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </disk>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <graphics supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>vnc</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>egl-headless</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>dbus</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </graphics>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <video supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='modelType'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>vga</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>cirrus</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>none</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>bochs</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>ramfb</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </video>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <hostdev supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='mode'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>subsystem</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='startupPolicy'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>default</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>mandatory</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>requisite</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>optional</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='subsysType'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>usb</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>pci</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>scsi</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='capsType'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='pciBackend'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </hostdev>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <rng supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>virtio-transitional</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>virtio-non-transitional</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='backendModel'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>random</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>egd</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>builtin</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </rng>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <filesystem supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='driverType'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>path</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>handle</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>virtiofs</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </filesystem>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <tpm supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>tpm-tis</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>tpm-crb</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='backendModel'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>emulator</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>external</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='backendVersion'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>2.0</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </tpm>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <redirdev supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='bus'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>usb</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </redirdev>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <channel supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>pty</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>unix</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </channel>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <crypto supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='model'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>qemu</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='backendModel'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>builtin</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </crypto>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <interface supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='backendType'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>default</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>passt</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </interface>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <panic supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>isa</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>hyperv</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </panic>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <console supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>null</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>vc</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>pty</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>dev</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>file</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>pipe</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>stdio</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>udp</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>tcp</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>unix</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>qemu-vdagent</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>dbus</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </console>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   </devices>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <features>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <gic supported='no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <vmcoreinfo supported='yes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <genid supported='yes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <backingStoreInput supported='yes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <backup supported='yes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <async-teardown supported='yes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <s390-pv supported='no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <ps2 supported='yes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <tdx supported='no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <sev supported='no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <sgx supported='no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <hyperv supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='features'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>relaxed</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>vapic</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>spinlocks</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>vpindex</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>runtime</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>synic</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>stimer</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>reset</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>vendor_id</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>frequencies</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>reenlightenment</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>tlbflush</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>ipi</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>avic</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>emsr_bitmap</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>xmm_input</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <defaults>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <spinlocks>4095</spinlocks>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <stimer_direct>on</stimer_direct>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </defaults>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </hyperv>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <launchSecurity supported='no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   </features>
Jan 27 18:55:26 compute-0 nova_compute[184561]: </domainCapabilities>
Jan 27 18:55:26 compute-0 nova_compute[184561]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 18:55:26 compute-0 nova_compute[184561]: 2026-01-27 18:55:26.842 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 27 18:55:26 compute-0 nova_compute[184561]: <domainCapabilities>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <domain>kvm</domain>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <arch>i686</arch>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <vcpu max='4096'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <iothreads supported='yes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <os supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <enum name='firmware'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <loader supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>rom</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>pflash</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='readonly'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>yes</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>no</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='secure'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>no</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </loader>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   </os>
Jan 27 18:55:26 compute-0 nova_compute[184561]:   <cpu>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <mode name='host-passthrough' supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='hostPassthroughMigratable'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>on</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>off</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <mode name='maximum' supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <enum name='maximumMigratable'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>on</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <value>off</value>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <mode name='host-model' supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <vendor>AMD</vendor>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='x2apic'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='hypervisor'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='stibp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='ssbd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='overflow-recov'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='succor'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='ibrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='lbrv'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='tsc-scale'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='flushbyasid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='pause-filter'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='pfthreshold'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <feature policy='disable' name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:26 compute-0 nova_compute[184561]:     <mode name='custom' supported='yes'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-noTSX'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='ClearwaterForest'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bhi-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ddpd-u'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sha512'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sm3'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sm4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='ClearwaterForest-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bhi-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ddpd-u'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sha512'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sm3'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sm4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cooperlake'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cooperlake-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Cooperlake-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Denverton'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Denverton-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Denverton-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Denverton-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Dhyana-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Genoa'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='perfmon-v2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Turin'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='perfmon-v2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbpb'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-Turin-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='perfmon-v2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbpb'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='EPYC-v5'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10-128'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10-256'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10-512'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10-128'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10-256'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx10-512'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 python3.9[185413]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-noTSX'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Haswell-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v5'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v6'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v7'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='IvyBridge'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='IvyBridge-IBRS'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='IvyBridge-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='IvyBridge-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='KnightsMill'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512er'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512pf'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='KnightsMill-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512er'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512pf'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Opteron_G4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Opteron_G4-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Opteron_G5'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tbm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='Opteron_G5-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tbm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v1'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v2'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v3'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v4'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 18:55:26 compute-0 nova_compute[184561]:       <blockers model='SierraForest'>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:26 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SierraForest-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SierraForest-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SierraForest-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v5'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='athlon'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='athlon-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='core2duo'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='core2duo-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='coreduo'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='coreduo-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='n270'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='n270-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='phenom'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='phenom-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </cpu>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <memoryBacking supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <enum name='sourceType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>file</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>anonymous</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>memfd</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </memoryBacking>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <devices>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <disk supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='diskDevice'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>disk</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>cdrom</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>floppy</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>lun</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='bus'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>fdc</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>scsi</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>usb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>sata</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio-transitional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio-non-transitional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </disk>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <graphics supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vnc</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>egl-headless</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>dbus</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </graphics>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <video supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='modelType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vga</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>cirrus</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>none</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>bochs</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>ramfb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </video>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <hostdev supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='mode'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>subsystem</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='startupPolicy'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>default</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>mandatory</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>requisite</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>optional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='subsysType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>usb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pci</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>scsi</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='capsType'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='pciBackend'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </hostdev>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <rng supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio-transitional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio-non-transitional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendModel'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>random</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>egd</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>builtin</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </rng>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <filesystem supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='driverType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>path</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>handle</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtiofs</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </filesystem>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <tpm supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>tpm-tis</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>tpm-crb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendModel'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>emulator</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>external</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendVersion'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>2.0</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </tpm>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <redirdev supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='bus'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>usb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </redirdev>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <channel supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pty</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>unix</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </channel>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <crypto supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>qemu</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendModel'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>builtin</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </crypto>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <interface supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>default</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>passt</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </interface>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <panic supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>isa</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>hyperv</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </panic>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <console supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>null</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vc</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pty</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>dev</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>file</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pipe</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>stdio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>udp</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>tcp</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>unix</value>
Jan 27 18:55:27 compute-0 systemd[1]: Stopping nova_compute container...
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>qemu-vdagent</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>dbus</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </console>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </devices>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <features>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <gic supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <vmcoreinfo supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <genid supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <backingStoreInput supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <backup supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <async-teardown supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <s390-pv supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <ps2 supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <tdx supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <sev supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <sgx supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <hyperv supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='features'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>relaxed</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vapic</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>spinlocks</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vpindex</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>runtime</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>synic</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>stimer</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>reset</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vendor_id</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>frequencies</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>reenlightenment</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>tlbflush</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>ipi</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>avic</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>emsr_bitmap</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>xmm_input</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <defaults>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <spinlocks>4095</spinlocks>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <stimer_direct>on</stimer_direct>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </defaults>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </hyperv>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <launchSecurity supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </features>
Jan 27 18:55:27 compute-0 nova_compute[184561]: </domainCapabilities>
Jan 27 18:55:27 compute-0 nova_compute[184561]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 18:55:27 compute-0 nova_compute[184561]: 2026-01-27 18:55:26.919 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 18:55:27 compute-0 nova_compute[184561]: 2026-01-27 18:55:26.973 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 27 18:55:27 compute-0 nova_compute[184561]: <domainCapabilities>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <domain>kvm</domain>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <arch>x86_64</arch>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <vcpu max='240'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <iothreads supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <os supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <enum name='firmware'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <loader supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>rom</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pflash</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='readonly'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>yes</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>no</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='secure'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>no</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </loader>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </os>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <cpu>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <mode name='host-passthrough' supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='hostPassthroughMigratable'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>on</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>off</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <mode name='maximum' supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='maximumMigratable'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>on</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>off</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <mode name='host-model' supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <vendor>AMD</vendor>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='x2apic'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='hypervisor'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='stibp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='ssbd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='overflow-recov'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='succor'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='ibrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='lbrv'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='tsc-scale'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='flushbyasid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='pause-filter'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='pfthreshold'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='disable' name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <mode name='custom' supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-noTSX'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='ClearwaterForest'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ddpd-u'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sha512'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sm3'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sm4'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='ClearwaterForest-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ddpd-u'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sha512'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sm3'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sm4'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cooperlake'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cooperlake-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cooperlake-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Denverton'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Denverton-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Denverton-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Denverton-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Dhyana-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Genoa'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='perfmon-v2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Turin'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='perfmon-v2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbpb'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Turin-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='perfmon-v2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbpb'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-v5'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10-128'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10-256'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10-512'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10-128'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10-256'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10-512'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-noTSX'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v5'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v6'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v7'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='IvyBridge'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='IvyBridge-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='IvyBridge-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='IvyBridge-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='KnightsMill'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512er'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512pf'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='KnightsMill-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512er'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512pf'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Opteron_G4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Opteron_G4-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Opteron_G5'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tbm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Opteron_G5-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tbm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SierraForest'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SierraForest-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SierraForest-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SierraForest-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v5'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='athlon'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='athlon-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='core2duo'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='core2duo-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='coreduo'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='coreduo-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='n270'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='n270-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='phenom'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='phenom-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </cpu>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <memoryBacking supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <enum name='sourceType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>file</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>anonymous</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>memfd</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </memoryBacking>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <devices>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <disk supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='diskDevice'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>disk</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>cdrom</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>floppy</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>lun</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='bus'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>ide</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>fdc</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>scsi</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>usb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>sata</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio-transitional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio-non-transitional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </disk>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <graphics supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vnc</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>egl-headless</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>dbus</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </graphics>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <video supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='modelType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vga</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>cirrus</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>none</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>bochs</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>ramfb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </video>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <hostdev supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='mode'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>subsystem</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='startupPolicy'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>default</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>mandatory</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>requisite</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>optional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='subsysType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>usb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pci</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>scsi</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='capsType'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='pciBackend'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </hostdev>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <rng supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio-transitional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio-non-transitional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendModel'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>random</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>egd</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>builtin</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </rng>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <filesystem supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='driverType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>path</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>handle</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtiofs</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </filesystem>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <tpm supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>tpm-tis</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>tpm-crb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendModel'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>emulator</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>external</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendVersion'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>2.0</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </tpm>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <redirdev supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='bus'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>usb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </redirdev>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <channel supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pty</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>unix</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </channel>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <crypto supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>qemu</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendModel'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>builtin</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </crypto>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <interface supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>default</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>passt</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </interface>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <panic supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>isa</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>hyperv</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </panic>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <console supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>null</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vc</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pty</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>dev</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>file</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pipe</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>stdio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>udp</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>tcp</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>unix</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>qemu-vdagent</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>dbus</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </console>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </devices>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <features>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <gic supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <vmcoreinfo supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <genid supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <backingStoreInput supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <backup supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <async-teardown supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <s390-pv supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <ps2 supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <tdx supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <sev supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <sgx supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <hyperv supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='features'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>relaxed</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vapic</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>spinlocks</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vpindex</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>runtime</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>synic</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>stimer</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>reset</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vendor_id</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>frequencies</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>reenlightenment</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>tlbflush</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>ipi</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>avic</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>emsr_bitmap</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>xmm_input</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <defaults>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <spinlocks>4095</spinlocks>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <stimer_direct>on</stimer_direct>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </defaults>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </hyperv>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <launchSecurity supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </features>
Jan 27 18:55:27 compute-0 nova_compute[184561]: </domainCapabilities>
Jan 27 18:55:27 compute-0 nova_compute[184561]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 18:55:27 compute-0 nova_compute[184561]: 2026-01-27 18:55:27.035 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 27 18:55:27 compute-0 nova_compute[184561]: <domainCapabilities>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <domain>kvm</domain>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <arch>x86_64</arch>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <vcpu max='4096'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <iothreads supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <os supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <enum name='firmware'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>efi</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <loader supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>rom</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pflash</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='readonly'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>yes</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>no</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='secure'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>yes</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>no</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </loader>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </os>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <cpu>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <mode name='host-passthrough' supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='hostPassthroughMigratable'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>on</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>off</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <mode name='maximum' supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='maximumMigratable'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>on</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>off</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <mode name='host-model' supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <vendor>AMD</vendor>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='x2apic'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='hypervisor'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='stibp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='ssbd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='overflow-recov'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='succor'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='ibrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='lbrv'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='tsc-scale'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='flushbyasid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='pause-filter'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='pfthreshold'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <feature policy='disable' name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <mode name='custom' supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-noTSX'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Broadwell-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='ClearwaterForest'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ddpd-u'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sha512'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sm3'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sm4'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='ClearwaterForest-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ddpd-u'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sha512'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sm3'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sm4'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cooperlake'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cooperlake-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Cooperlake-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Denverton'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Denverton-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Denverton-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Denverton-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Dhyana-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Genoa'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='perfmon-v2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Milan-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Rome-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Turin'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='perfmon-v2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbpb'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-Turin-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amd-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='auto-ibrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='perfmon-v2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbpb'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='stibp-always-on'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='EPYC-v5'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10-128'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10-256'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10-512'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='GraniteRapids-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10-128'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10-256'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx10-512'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='prefetchiti'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-noTSX'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Haswell-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v5'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v6'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Icelake-Server-v7'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='IvyBridge'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='IvyBridge-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='IvyBridge-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='IvyBridge-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='KnightsMill'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512er'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512pf'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='KnightsMill-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512er'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512pf'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Opteron_G4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Opteron_G4-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Opteron_G5'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tbm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Opteron_G5-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fma4'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tbm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xop'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SapphireRapids-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='amx-tile'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-bf16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-fp16'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bitalg'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrc'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fzrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='la57'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='taa-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SierraForest'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SierraForest-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SierraForest-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='SierraForest-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ifma'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cmpccxadd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fbsdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='fsrs'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ibrs-all'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='intel-psfd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='lam'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mcdt-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pbrsb-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='psdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='serialize'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vaes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Client-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='hle'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='rtm'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Skylake-Server-v5'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512bw'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512cd'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512dq'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512f'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='avx512vl'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='invpcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pcid'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='pku'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='mpx'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v2'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v3'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='core-capability'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='split-lock-detect'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='Snowridge-v4'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='cldemote'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='erms'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='gfni'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdir64b'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='movdiri'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='xsaves'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='athlon'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='athlon-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='core2duo'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='core2duo-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='coreduo'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='coreduo-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='n270'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='n270-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='ss'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='phenom'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <blockers model='phenom-v1'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnow'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <feature name='3dnowext'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </blockers>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </mode>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </cpu>
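[annotation] The <cpu> section ending above is part of libvirt's domainCapabilities report: in custom mode every named CPU model is flagged usable='yes' or usable='no', and each unusable model is followed by a <blockers> element naming the host-missing features that rule it out (the AVX-512/AMX families for SapphireRapids and SierraForest, erms/invpcid/pcid/rtm for the Skylake variants, 3dnow/3dnowext for athlon and phenom), while models such as Westmere and the generic qemu64/kvm64 entries remain usable here. The same document can be printed interactively with `virsh domcapabilities`. Below is a minimal libvirt-python sketch of extracting that usable/blocked view; the qemu:///system URI and the xml.etree parsing are illustrative assumptions, not how nova itself consumes the report.

import libvirt
import xml.etree.ElementTree as ET

# Illustrative sketch: list the named CPU models libvirt reports for custom
# mode and, for the unusable ones, the features that block them.
conn = libvirt.open("qemu:///system")
caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
conn.close()

mode = ET.fromstring(caps_xml).find("./cpu/mode[@name='custom']")
blockers = {b.get("model"): [f.get("name") for f in b.findall("feature")]
            for b in mode.findall("blockers")}
for model in mode.findall("model"):
    if model.get("usable") == "yes":
        print(f"{model.text}: usable")
    else:
        print(f"{model.text}: blocked by {', '.join(blockers.get(model.text, []))}")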
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <memoryBacking supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <enum name='sourceType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>file</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>anonymous</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <value>memfd</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </memoryBacking>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <devices>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <disk supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='diskDevice'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>disk</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>cdrom</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>floppy</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>lun</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='bus'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>fdc</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>scsi</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>usb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>sata</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio-transitional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio-non-transitional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </disk>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <graphics supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vnc</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>egl-headless</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>dbus</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </graphics>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <video supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='modelType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vga</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>cirrus</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>none</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>bochs</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>ramfb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </video>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <hostdev supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='mode'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>subsystem</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='startupPolicy'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>default</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>mandatory</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>requisite</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>optional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='subsysType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>usb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pci</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>scsi</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='capsType'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='pciBackend'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </hostdev>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <rng supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio-transitional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtio-non-transitional</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendModel'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>random</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>egd</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>builtin</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </rng>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <filesystem supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='driverType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>path</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>handle</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>virtiofs</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </filesystem>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <tpm supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>tpm-tis</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>tpm-crb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendModel'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>emulator</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>external</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendVersion'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>2.0</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </tpm>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <redirdev supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='bus'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>usb</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </redirdev>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <channel supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pty</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>unix</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </channel>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <crypto supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>qemu</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendModel'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>builtin</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </crypto>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <interface supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='backendType'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>default</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>passt</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </interface>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <panic supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='model'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>isa</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>hyperv</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </panic>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <console supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='type'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>null</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vc</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pty</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>dev</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>file</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>pipe</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>stdio</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>udp</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>tcp</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>unix</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>qemu-vdagent</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>dbus</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </console>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </devices>
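[annotation] The <devices> section above lists, per device type, the enums of values this QEMU binary supports (disk buses fdc/scsi/virtio/usb/sata, graphics vnc/egl-headless/dbus, video models vga/cirrus/virtio/bochs/ramfb, TPM models tpm-tis/tpm-crb with emulator or external backends, and so on); nova consults these enums when assembling guest XML. A small helper for pulling one enum out of the document is sketched below, assuming the XML has been saved to a local file (the filename is illustrative).

import xml.etree.ElementTree as ET

def device_enum_values(caps_xml, device, enum):
    """Return the <value> entries of one enum under <devices>/<device>,
    e.g. device_enum_values(xml_text, 'disk', 'bus') -> {'fdc', 'scsi', ...}."""
    root = ET.fromstring(caps_xml)
    node = root.find(f"./devices/{device}/enum[@name='{enum}']")
    return {v.text for v in node.findall("value")} if node is not None else set()

with open("domcapabilities.xml") as f:   # illustrative: XML saved from this log
    caps = f.read()
print("virtio disk bus supported:", "virtio" in device_enum_values(caps, "disk", "bus"))
print("tpm-crb model supported:  ", "tpm-crb" in device_enum_values(caps, "tpm", "model"))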
Jan 27 18:55:27 compute-0 nova_compute[184561]:   <features>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <gic supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <vmcoreinfo supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <genid supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <backingStoreInput supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <backup supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <async-teardown supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <s390-pv supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <ps2 supported='yes'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <tdx supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <sev supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <sgx supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <hyperv supported='yes'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <enum name='features'>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>relaxed</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vapic</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>spinlocks</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vpindex</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>runtime</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>synic</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>stimer</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>reset</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>vendor_id</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>frequencies</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>reenlightenment</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>tlbflush</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>ipi</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>avic</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>emsr_bitmap</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <value>xmm_input</value>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </enum>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       <defaults>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <spinlocks>4095</spinlocks>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <stimer_direct>on</stimer_direct>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 18:55:27 compute-0 nova_compute[184561]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 18:55:27 compute-0 nova_compute[184561]:       </defaults>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     </hyperv>
Jan 27 18:55:27 compute-0 nova_compute[184561]:     <launchSecurity supported='no'/>
Jan 27 18:55:27 compute-0 nova_compute[184561]:   </features>
Jan 27 18:55:27 compute-0 nova_compute[184561]: </domainCapabilities>
Jan 27 18:55:27 compute-0 nova_compute[184561]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 18:55:27 compute-0 nova_compute[184561]: 2026-01-27 18:55:27.111 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 18:55:27 compute-0 nova_compute[184561]: 2026-01-27 18:55:27.112 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 18:55:27 compute-0 nova_compute[184561]: 2026-01-27 18:55:27.112 184565 DEBUG nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 18:55:27 compute-0 nova_compute[184561]: 2026-01-27 18:55:27.117 184565 INFO nova.virt.libvirt.host [None req-d872f993-5472-4e1b-aea7-c360abcd18b8 - - - - - -] Secure Boot support detected
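[annotation] The three "Checking secure boot support" DEBUG lines are nova's supports_secure_boot probe, run once per cached machine type, and the INFO line records its positive result. The signal comes from the same domain capabilities document, whose <os> element (emitted earlier in the dump, outside this excerpt) describes the available UEFI loader. Below is a rough sketch of the libvirt-level check, under the assumption that the <enum name='secure'> values beneath <os>/<loader> are the deciding field; it approximates, rather than reproduces, nova's own traversal in host.py.

import xml.etree.ElementTree as ET

def advertises_secure_boot(caps_xml):
    """True if the domcapabilities XML reports a secure-boot-capable loader.
    Illustrative approximation of nova's supports_secure_boot check."""
    secure = ET.fromstring(caps_xml).find("./os/loader/enum[@name='secure']")
    return secure is not None and any(v.text == "yes" for v in secure.findall("value"))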
Jan 27 18:55:27 compute-0 nova_compute[184561]: 2026-01-27 18:55:27.118 184565 DEBUG oslo_concurrency.lockutils [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 18:55:27 compute-0 nova_compute[184561]: 2026-01-27 18:55:27.118 184565 DEBUG oslo_concurrency.lockutils [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 18:55:27 compute-0 nova_compute[184561]: 2026-01-27 18:55:27.118 184565 DEBUG oslo_concurrency.lockutils [None req-a212448c-372d-4bd6-8eb5-2decee5dda90 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 18:55:27 compute-0 virtqemud[185201]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 27 18:55:27 compute-0 virtqemud[185201]: hostname: compute-0
Jan 27 18:55:27 compute-0 virtqemud[185201]: End of file while reading data: Input/output error
Jan 27 18:55:27 compute-0 systemd[1]: libpod-89a631991284c1dc40279d108e706116b6210a3385815ea91a1fcdf5db089950.scope: Deactivated successfully.
Jan 27 18:55:27 compute-0 systemd[1]: libpod-89a631991284c1dc40279d108e706116b6210a3385815ea91a1fcdf5db089950.scope: Consumed 3.257s CPU time.
Jan 27 18:55:27 compute-0 podman[185421]: 2026-01-27 18:55:27.586429934 +0000 UTC m=+0.546246947 container died 89a631991284c1dc40279d108e706116b6210a3385815ea91a1fcdf5db089950 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=nova_compute)
Jan 27 18:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89a631991284c1dc40279d108e706116b6210a3385815ea91a1fcdf5db089950-userdata-shm.mount: Deactivated successfully.
Jan 27 18:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf3ff0d6842a9c71bf3709bea9a02f4ba9f5d2018f441fb57f2f1011147adc5d-merged.mount: Deactivated successfully.
Jan 27 18:55:27 compute-0 podman[185421]: 2026-01-27 18:55:27.696528623 +0000 UTC m=+0.656345606 container cleanup 89a631991284c1dc40279d108e706116b6210a3385815ea91a1fcdf5db089950 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 27 18:55:27 compute-0 podman[185421]: nova_compute
Jan 27 18:55:27 compute-0 podman[185452]: nova_compute
Jan 27 18:55:27 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 27 18:55:27 compute-0 systemd[1]: Stopped nova_compute container.
Jan 27 18:55:27 compute-0 systemd[1]: Starting nova_compute container...
Jan 27 18:55:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3ff0d6842a9c71bf3709bea9a02f4ba9f5d2018f441fb57f2f1011147adc5d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3ff0d6842a9c71bf3709bea9a02f4ba9f5d2018f441fb57f2f1011147adc5d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3ff0d6842a9c71bf3709bea9a02f4ba9f5d2018f441fb57f2f1011147adc5d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3ff0d6842a9c71bf3709bea9a02f4ba9f5d2018f441fb57f2f1011147adc5d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3ff0d6842a9c71bf3709bea9a02f4ba9f5d2018f441fb57f2f1011147adc5d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:27 compute-0 podman[185465]: 2026-01-27 18:55:27.955191847 +0000 UTC m=+0.157243103 container init 89a631991284c1dc40279d108e706116b6210a3385815ea91a1fcdf5db089950 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 18:55:27 compute-0 podman[185465]: 2026-01-27 18:55:27.961930853 +0000 UTC m=+0.163982089 container start 89a631991284c1dc40279d108e706116b6210a3385815ea91a1fcdf5db089950 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:55:27 compute-0 nova_compute[185480]: + sudo -E kolla_set_configs
Jan 27 18:55:28 compute-0 podman[185465]: nova_compute
Jan 27 18:55:28 compute-0 systemd[1]: Started nova_compute container.
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Validating config file
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Copying service configuration files
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Deleting /etc/ceph
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Creating directory /etc/ceph
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Setting permission for /etc/ceph
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Writing out command to execute
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 18:55:28 compute-0 nova_compute[185480]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 18:55:28 compute-0 sudo[185411]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:28 compute-0 nova_compute[185480]: ++ cat /run_command
Jan 27 18:55:28 compute-0 nova_compute[185480]: + CMD=nova-compute
Jan 27 18:55:28 compute-0 nova_compute[185480]: + ARGS=
Jan 27 18:55:28 compute-0 nova_compute[185480]: + sudo kolla_copy_cacerts
Jan 27 18:55:28 compute-0 nova_compute[185480]: + [[ ! -n '' ]]
Jan 27 18:55:28 compute-0 nova_compute[185480]: + . kolla_extend_start
Jan 27 18:55:28 compute-0 nova_compute[185480]: Running command: 'nova-compute'
Jan 27 18:55:28 compute-0 nova_compute[185480]: + echo 'Running command: '\''nova-compute'\'''
Jan 27 18:55:28 compute-0 nova_compute[185480]: + umask 0022
Jan 27 18:55:28 compute-0 nova_compute[185480]: + exec nova-compute
Jan 27 18:55:28 compute-0 sudo[185641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpntxclavfloqbaugpckpaethtmcxtkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540128.2831972-1287-76747596460568/AnsiballZ_podman_container.py'
Jan 27 18:55:28 compute-0 sudo[185641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:28 compute-0 python3.9[185643]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 27 18:55:29 compute-0 systemd[1]: Started libpod-conmon-5200f212accb2a34eae6559f15acfcbba510057f85566bea26289d96d98be3ce.scope.
Jan 27 18:55:29 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449ed574dcf4c66e50a8b16a6a73c02392d7ffa1b40fe3e1c82037f4f739697c/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449ed574dcf4c66e50a8b16a6a73c02392d7ffa1b40fe3e1c82037f4f739697c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449ed574dcf4c66e50a8b16a6a73c02392d7ffa1b40fe3e1c82037f4f739697c/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 27 18:55:29 compute-0 podman[185668]: 2026-01-27 18:55:29.088711069 +0000 UTC m=+0.148460744 container init 5200f212accb2a34eae6559f15acfcbba510057f85566bea26289d96d98be3ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 18:55:29 compute-0 podman[185668]: 2026-01-27 18:55:29.099134759 +0000 UTC m=+0.158884424 container start 5200f212accb2a34eae6559f15acfcbba510057f85566bea26289d96d98be3ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Jan 27 18:55:29 compute-0 python3.9[185643]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Applying nova statedir ownership
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 27 18:55:29 compute-0 nova_compute_init[185689]: INFO:nova_statedir:Nova statedir ownership complete
Jan 27 18:55:29 compute-0 systemd[1]: libpod-5200f212accb2a34eae6559f15acfcbba510057f85566bea26289d96d98be3ce.scope: Deactivated successfully.
Jan 27 18:55:29 compute-0 podman[185703]: 2026-01-27 18:55:29.193481945 +0000 UTC m=+0.023292411 container died 5200f212accb2a34eae6559f15acfcbba510057f85566bea26289d96d98be3ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5200f212accb2a34eae6559f15acfcbba510057f85566bea26289d96d98be3ce-userdata-shm.mount: Deactivated successfully.
Jan 27 18:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-449ed574dcf4c66e50a8b16a6a73c02392d7ffa1b40fe3e1c82037f4f739697c-merged.mount: Deactivated successfully.
Jan 27 18:55:29 compute-0 podman[185703]: 2026-01-27 18:55:29.237691445 +0000 UTC m=+0.067501891 container cleanup 5200f212accb2a34eae6559f15acfcbba510057f85566bea26289d96d98be3ce (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=nova_compute_init, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 27 18:55:29 compute-0 systemd[1]: libpod-conmon-5200f212accb2a34eae6559f15acfcbba510057f85566bea26289d96d98be3ce.scope: Deactivated successfully.
Jan 27 18:55:29 compute-0 sudo[185641]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:29 compute-0 sshd-session[162448]: Connection closed by 192.168.122.31 port 57630
Jan 27 18:55:29 compute-0 sshd-session[162445]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:55:29 compute-0 systemd-logind[795]: Session 23 logged out. Waiting for processes to exit.
Jan 27 18:55:29 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Jan 27 18:55:29 compute-0 systemd[1]: session-23.scope: Consumed 1min 42.328s CPU time.
Jan 27 18:55:29 compute-0 systemd-logind[795]: Removed session 23.
Jan 27 18:55:30 compute-0 nova_compute[185480]: 2026-01-27 18:55:30.242 185484 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 18:55:30 compute-0 nova_compute[185480]: 2026-01-27 18:55:30.243 185484 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 18:55:30 compute-0 nova_compute[185480]: 2026-01-27 18:55:30.243 185484 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 18:55:30 compute-0 nova_compute[185480]: 2026-01-27 18:55:30.243 185484 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 27 18:55:30 compute-0 nova_compute[185480]: 2026-01-27 18:55:30.480 185484 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 18:55:30 compute-0 nova_compute[185480]: 2026-01-27 18:55:30.503 185484 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 18:55:30 compute-0 nova_compute[185480]: 2026-01-27 18:55:30.503 185484 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 27 18:55:30 compute-0 nova_compute[185480]: 2026-01-27 18:55:30.938 185484 INFO nova.virt.driver [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.052 185484 INFO nova.compute.provider_config [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.067 185484 DEBUG oslo_concurrency.lockutils [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.067 185484 DEBUG oslo_concurrency.lockutils [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.067 185484 DEBUG oslo_concurrency.lockutils [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.068 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.068 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.068 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.068 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.068 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.068 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.069 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.069 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.069 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.069 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.069 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.069 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.069 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.070 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.070 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.070 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.070 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.070 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.070 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.071 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.071 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.071 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.071 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.071 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.071 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.071 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.072 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.072 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.072 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.072 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.072 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.072 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.073 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.073 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.073 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.073 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.073 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.073 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.074 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.074 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.074 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.074 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.074 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.074 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.075 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.075 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.075 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.075 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.075 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.075 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.075 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.076 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.076 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.076 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.076 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.076 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.076 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.076 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.077 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.077 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.077 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.077 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.077 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.077 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.077 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.078 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.078 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.078 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.078 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.078 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.078 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.078 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.079 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.079 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.079 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.079 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.079 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.079 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.079 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.080 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.080 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.080 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.080 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.080 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.080 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.080 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.081 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.081 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.081 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.081 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.081 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.081 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.082 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.082 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.082 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.082 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.082 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.082 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.082 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.083 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.083 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.083 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.083 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.083 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.083 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.083 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.083 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.084 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.084 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.084 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.084 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.084 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.084 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.084 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.085 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.085 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.085 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.085 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.085 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.085 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.085 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.086 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.086 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.086 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.086 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.086 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.086 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.086 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.087 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.087 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.087 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.087 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.087 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.087 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.087 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.088 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.088 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.088 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.088 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.088 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.088 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.089 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.089 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.089 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.089 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.089 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.089 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.089 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.090 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.090 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.090 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.090 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.090 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.090 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.090 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.091 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.091 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.091 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.091 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.091 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.091 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.092 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.092 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.092 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.092 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.092 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.092 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.092 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.093 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.093 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.093 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.093 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.093 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.093 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.094 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.094 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.094 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.094 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.094 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.094 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.095 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.095 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.095 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.095 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.095 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.095 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.096 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.096 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.096 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.096 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.096 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.096 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.096 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.097 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.097 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.097 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.097 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.097 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.097 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.098 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.098 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.098 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.098 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.098 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.098 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.098 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.099 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.099 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.099 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.099 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.099 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.099 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.099 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.100 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.100 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.100 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.100 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.100 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.100 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.100 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.101 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.101 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.101 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.101 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.101 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.102 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.102 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.102 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.102 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.102 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.102 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.103 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.103 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.103 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.103 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.103 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.103 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.103 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.104 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.104 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.104 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.104 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.104 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.104 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.105 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.105 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.105 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.105 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.105 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.105 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.106 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.106 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.106 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.106 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.106 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.106 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.107 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.107 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.107 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.107 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.107 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.107 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.107 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.108 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.108 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.108 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.108 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.108 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.108 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.109 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.109 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.109 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.109 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.109 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.109 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.109 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.110 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.110 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.110 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.110 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.110 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.111 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.111 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.111 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.111 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.111 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.111 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.112 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.112 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.112 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.112 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.112 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.112 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.112 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.113 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.113 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.113 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.113 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.113 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.113 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.113 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.114 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.114 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.114 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.114 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.114 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.114 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.114 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.115 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.115 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.115 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.115 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.115 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.115 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.115 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.116 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.116 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.116 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.116 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.116 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.116 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.116 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.117 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.117 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.117 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.117 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.117 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.117 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.117 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.118 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.118 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.118 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.118 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.118 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.118 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.118 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.119 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.119 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.119 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.119 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.119 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.119 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.119 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.120 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.120 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.120 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.120 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.120 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.120 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.120 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.121 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.121 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.121 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.121 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.121 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.122 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.122 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.122 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.122 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.123 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.123 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.123 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.123 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.123 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.123 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.123 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.124 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.124 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.124 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.124 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.124 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.124 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.124 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.125 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.125 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.125 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.125 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.125 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.125 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.125 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.126 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.126 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.126 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.126 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.126 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.126 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.126 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.127 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.127 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.127 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.127 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.127 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.127 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.128 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.128 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.128 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.128 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.128 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.128 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.129 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.129 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.129 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.129 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.129 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.129 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.130 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.130 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.130 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.130 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.130 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.130 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.130 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.131 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.131 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.131 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.131 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.131 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.131 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.131 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.132 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.132 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.132 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.132 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.132 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.132 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.132 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.133 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.133 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.133 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.133 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.133 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.133 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.134 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.134 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.134 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.134 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.134 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.134 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.134 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.135 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.135 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.135 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.135 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.135 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.136 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.136 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.136 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.136 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.136 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.136 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.137 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.137 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.137 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.137 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.137 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.137 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.138 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.138 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.138 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.138 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.138 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.138 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.139 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.139 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.139 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.139 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.139 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.139 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.140 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.140 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.140 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.140 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.140 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.141 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.141 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.141 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.141 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.141 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.141 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.141 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.142 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.142 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.142 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.142 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.142 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.142 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.143 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.143 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.143 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.143 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.143 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.144 185484 WARNING oslo_config.cfg [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 27 18:55:31 compute-0 nova_compute[185480]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 27 18:55:31 compute-0 nova_compute[185480]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 27 18:55:31 compute-0 nova_compute[185480]: and ``live_migration_inbound_addr`` respectively.
Jan 27 18:55:31 compute-0 nova_compute[185480]: ).  Its value may be silently ignored in the future.
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.144 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
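[Editor's note] The WARNING above reports that ``live_migration_uri`` (set here to ``qemu+tls://%s/system``) is deprecated in favour of ``live_migration_scheme`` and ``live_migration_inbound_addr``, and every DEBUG line in this dump is emitted by the same ``log_opt_values`` call in ``oslo_config/cfg.py``. The following is a minimal, self-contained sketch of that mechanism — the option names come from the log, but the project name and the fact that no config file is parsed are illustrative assumptions, not taken from this deployment:

```python
# Minimal sketch of the oslo.config mechanics behind this dump.
# Option names come from the log above; everything else (project name,
# absence of a real nova.conf) is a placeholder for illustration.
import logging
from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

libvirt_opts = [
    # Marking an option deprecated_for_removal is what produces the
    # "Deprecated: Option ... is deprecated for removal" WARNING seen above
    # when that option is actually set in the parsed config (this sketch
    # does not set it, so no warning is emitted here).
    cfg.StrOpt('live_migration_uri',
               deprecated_for_removal=True,
               deprecated_reason='Superseded by live_migration_scheme and '
                                 'live_migration_inbound_addr.'),
    cfg.StrOpt('live_migration_scheme'),
    cfg.StrOpt('live_migration_inbound_addr'),
]

CONF = cfg.ConfigOpts()
CONF.register_opts(libvirt_opts, group='libvirt')
CONF([], project='nova-sketch')  # a real service would parse nova.conf here

# log_opt_values() walks every registered group and option and logs one
# "<group>.<option> = <value>" line at the given level -- the call referenced
# as "log_opt_values .../oslo_config/cfg.py" at the end of each line above.
CONF.log_opt_values(LOG, logging.DEBUG)
```

The sketch only shows the logging path; migrating off the deprecated option in a real deployment means setting ``live_migration_scheme`` and ``live_migration_inbound_addr`` in the ``[libvirt]`` section of nova.conf instead of ``live_migration_uri``, as the warning text itself indicates.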
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.144 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.144 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.144 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.144 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.145 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.145 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.145 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.145 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.145 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.145 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.146 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.146 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.146 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.146 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.146 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.146 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.147 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.147 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.147 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.147 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.147 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.147 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.147 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.148 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.148 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.148 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.148 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.148 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.148 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.148 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.149 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.149 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.149 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.149 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.149 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.150 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.150 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.150 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.150 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.150 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.150 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.150 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.151 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.151 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.151 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.151 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.151 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.151 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.152 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.152 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.152 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.152 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.152 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.152 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.153 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.153 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.153 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.153 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.153 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.153 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.154 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.154 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.154 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.154 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.154 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.154 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.154 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.155 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.155 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.155 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.155 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.155 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.155 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.155 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.156 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.156 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.156 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.156 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.156 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.156 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.156 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.157 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.157 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.157 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.157 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.157 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.157 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.158 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.158 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.158 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.158 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.158 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.158 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.158 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.159 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.159 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.159 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.159 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.159 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.159 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.159 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.160 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.160 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.160 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.160 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.160 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.160 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.160 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.161 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.161 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.161 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.161 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.161 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.161 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.162 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.162 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.162 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.162 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.162 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.162 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.163 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.163 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.163 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.163 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.163 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.163 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.164 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.164 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.164 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.164 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.164 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.164 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.164 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.165 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.165 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.165 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.165 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.165 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.166 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.166 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.166 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.166 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.166 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.166 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.167 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.167 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.167 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.167 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.167 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.167 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.167 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.168 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.168 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.168 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.168 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.168 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.168 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.168 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.169 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.169 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.169 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.169 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.169 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.169 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.169 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.170 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.170 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.170 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.170 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.170 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.170 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.171 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.171 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.171 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.171 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.171 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.171 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.171 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.172 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.172 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.172 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.172 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.172 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.172 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.172 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.173 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.173 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.173 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.173 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.173 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.173 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.173 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.174 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.174 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.174 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.174 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.174 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.174 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.175 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.175 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.175 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.175 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.175 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.175 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.175 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.176 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.176 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.176 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.176 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.176 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.176 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.177 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.177 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.177 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.177 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.177 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.177 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.178 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.178 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.178 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.178 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.178 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.178 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.178 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.179 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.179 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.179 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.179 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.179 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.179 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.179 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.180 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.180 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.180 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.180 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.180 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.180 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.180 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.181 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.181 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.181 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.181 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.181 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.181 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.181 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.182 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.182 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.182 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.182 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.182 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.183 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.183 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.183 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.183 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.183 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.183 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.183 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.184 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.184 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.184 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.184 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.184 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.185 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.185 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.185 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.185 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.185 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.185 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.185 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.186 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.186 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.186 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.186 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.186 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.186 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.186 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.187 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.187 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.187 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.187 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.187 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.187 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.188 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.188 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.188 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.188 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.188 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.188 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.189 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.189 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.189 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.189 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.189 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.190 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.190 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.190 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.190 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.190 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.190 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.191 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.191 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.191 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.191 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.191 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.191 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.191 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.192 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.192 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.192 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.192 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.192 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.192 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.193 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.193 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.193 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.193 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.193 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.193 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.193 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.194 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.194 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.194 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.194 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.194 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.194 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.194 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.195 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.195 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.195 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.195 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.195 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.195 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.195 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.196 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.196 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.196 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.196 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.196 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.196 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.196 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.197 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.197 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.197 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.197 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.197 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.197 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.198 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.198 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.198 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.198 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.198 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.199 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.199 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.199 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.199 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.199 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.199 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.200 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.200 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.200 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.200 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.200 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.200 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.200 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.201 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.201 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.201 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.201 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.201 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.202 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.202 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.202 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.202 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.202 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.202 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.202 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.203 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.203 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.203 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.203 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.203 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.203 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.203 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.204 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.204 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.204 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.204 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.204 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.204 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.204 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.205 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.205 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.205 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.205 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.205 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.206 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.206 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.206 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.206 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.207 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.207 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.207 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.207 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.207 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.208 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.208 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.208 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.208 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.208 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.209 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.209 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.209 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.209 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.210 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.210 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.210 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.210 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.211 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.211 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.211 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.211 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.212 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.212 185484 DEBUG oslo_service.service [None req-70bc2c39-dff0-4bcc-9ba5-00ab7eee6b06 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.214 185484 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.254 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.255 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.255 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.255 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.270 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f25d6fdb550> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.273 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f25d6fdb550> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.274 185484 INFO nova.virt.libvirt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Connection event '1' reason 'None'
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.282 185484 INFO nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Libvirt host capabilities <capabilities>
Jan 27 18:55:31 compute-0 nova_compute[185480]: 
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <host>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <uuid>8c3eef75-502c-49c8-ac4b-40d0b3f964e2</uuid>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <cpu>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <arch>x86_64</arch>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model>EPYC-Rome-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <vendor>AMD</vendor>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <microcode version='16777317'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <signature family='23' model='49' stepping='0'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='x2apic'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='tsc-deadline'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='osxsave'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='hypervisor'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='tsc_adjust'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='spec-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='stibp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='arch-capabilities'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='cmp_legacy'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='topoext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='virt-ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='lbrv'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='tsc-scale'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='vmcb-clean'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='pause-filter'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='pfthreshold'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='svme-addr-chk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='rdctl-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='skip-l1dfl-vmentry'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='mds-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature name='pschange-mc-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <pages unit='KiB' size='4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <pages unit='KiB' size='2048'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <pages unit='KiB' size='1048576'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </cpu>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <power_management>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <suspend_mem/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <suspend_disk/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <suspend_hybrid/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </power_management>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <iommu support='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <migration_features>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <live/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <uri_transports>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <uri_transport>tcp</uri_transport>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <uri_transport>rdma</uri_transport>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </uri_transports>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </migration_features>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <topology>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <cells num='1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <cell id='0'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:           <memory unit='KiB'>7864304</memory>
Jan 27 18:55:31 compute-0 nova_compute[185480]:           <pages unit='KiB' size='4'>1966076</pages>
Jan 27 18:55:31 compute-0 nova_compute[185480]:           <pages unit='KiB' size='2048'>0</pages>
Jan 27 18:55:31 compute-0 nova_compute[185480]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 27 18:55:31 compute-0 nova_compute[185480]:           <distances>
Jan 27 18:55:31 compute-0 nova_compute[185480]:             <sibling id='0' value='10'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:           </distances>
Jan 27 18:55:31 compute-0 nova_compute[185480]:           <cpus num='8'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:           </cpus>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         </cell>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </cells>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </topology>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <cache>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </cache>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <secmodel>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model>selinux</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <doi>0</doi>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </secmodel>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <secmodel>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model>dac</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <doi>0</doi>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </secmodel>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </host>
Jan 27 18:55:31 compute-0 nova_compute[185480]: 
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <guest>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <os_type>hvm</os_type>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <arch name='i686'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <wordsize>32</wordsize>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <domain type='qemu'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <domain type='kvm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </arch>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <features>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <pae/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <nonpae/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <acpi default='on' toggle='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <apic default='on' toggle='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <cpuselection/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <deviceboot/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <disksnapshot default='on' toggle='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <externalSnapshot/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </features>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </guest>
Jan 27 18:55:31 compute-0 nova_compute[185480]: 
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <guest>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <os_type>hvm</os_type>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <arch name='x86_64'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <wordsize>64</wordsize>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <domain type='qemu'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <domain type='kvm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </arch>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <features>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <acpi default='on' toggle='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <apic default='on' toggle='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <cpuselection/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <deviceboot/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <disksnapshot default='on' toggle='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <externalSnapshot/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </features>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </guest>
Jan 27 18:55:31 compute-0 nova_compute[185480]: 
Jan 27 18:55:31 compute-0 nova_compute[185480]: </capabilities>
Jan 27 18:55:31 compute-0 nova_compute[185480]: 
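[Editor's note] The <capabilities> document logged above is the raw XML that nova's libvirt driver receives when it first connects to qemu:///system. For reference only, the same dump can be reproduced outside of nova with the libvirt Python bindings; this is a minimal sketch, not nova's own code, and the element paths are taken from the XML shown above:

    import libvirt                      # libvirt-python bindings
    import xml.etree.ElementTree as ET

    # Open a read-only connection to the hypervisor URI seen in the log above.
    conn = libvirt.openReadOnly('qemu:///system')

    # getCapabilities() returns the <capabilities> XML shown above as a string.
    caps = ET.fromstring(conn.getCapabilities())

    # Walk the NUMA topology: one <cell> per NUMA node, with memory and CPU ids.
    for cell in caps.findall('./host/topology/cells/cell'):
        mem_kib = cell.findtext('memory')
        cpu_ids = [c.get('id') for c in cell.findall('./cpus/cpu')]
        print(f"cell {cell.get('id')}: {mem_kib} KiB, cpus {cpu_ids}")

    conn.close()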
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.286 185484 WARNING nova.virt.libvirt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.287 185484 DEBUG nova.virt.libvirt.volume.mount [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.292 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
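[Editor's note] The per-machine-type probing reported in the DEBUG line above uses libvirt's domain-capabilities API. A hedged sketch of the equivalent direct call follows (emulator path, arch, machine type, and virt type are copied from the surrounding log lines; this is illustrative, not nova's implementation):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')

    # getDomainCapabilities() returns the <domainCapabilities> XML dumped below.
    # Arguments: emulator binary, guest arch, machine type, virt type, flags.
    dom_caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator path from the <emulator> element above
        'i686',                   # arch named in the DEBUG line above
        'pc',                     # machine type alias probed first
        'kvm',                    # virt type
        0)
    print(dom_caps_xml)

    conn.close()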
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.297 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 27 18:55:31 compute-0 nova_compute[185480]: <domainCapabilities>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <domain>kvm</domain>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <arch>i686</arch>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <vcpu max='240'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <iothreads supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <os supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <enum name='firmware'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <loader supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>rom</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pflash</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='readonly'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>yes</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>no</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='secure'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>no</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </loader>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </os>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <cpu>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='host-passthrough' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='hostPassthroughMigratable'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>on</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>off</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='maximum' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='maximumMigratable'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>on</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>off</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='host-model' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <vendor>AMD</vendor>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='x2apic'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='hypervisor'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='stibp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='overflow-recov'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='succor'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='lbrv'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='tsc-scale'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='flushbyasid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='pause-filter'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='pfthreshold'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='disable' name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='custom' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='ClearwaterForest'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ddpd-u'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sha512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm3'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='ClearwaterForest-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ddpd-u'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sha512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm3'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cooperlake'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cooperlake-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cooperlake-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Dhyana-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Genoa'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='perfmon-v2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Turin'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='perfmon-v2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbpb'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Turin-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='perfmon-v2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbpb'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-128'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-256'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-128'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-256'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v6'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v7'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='KnightsMill'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512er'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512pf'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='KnightsMill-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512er'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512pf'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G4-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tbm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G5-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tbm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='athlon'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='athlon-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='core2duo'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='core2duo-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='coreduo'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='coreduo-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='n270'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='n270-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='phenom'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='phenom-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </cpu>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <memoryBacking supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <enum name='sourceType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>file</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>anonymous</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>memfd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </memoryBacking>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <devices>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <disk supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='diskDevice'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>disk</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>cdrom</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>floppy</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>lun</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='bus'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>ide</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>fdc</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>scsi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>usb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>sata</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-non-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </disk>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <graphics supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vnc</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>egl-headless</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>dbus</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </graphics>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <video supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='modelType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vga</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>cirrus</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>none</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>bochs</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>ramfb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </video>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <hostdev supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='mode'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>subsystem</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='startupPolicy'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>default</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>mandatory</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>requisite</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>optional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='subsysType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>usb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pci</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>scsi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='capsType'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='pciBackend'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </hostdev>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <rng supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-non-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendModel'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>random</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>egd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>builtin</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </rng>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <filesystem supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='driverType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>path</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>handle</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtiofs</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </filesystem>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <tpm supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tpm-tis</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tpm-crb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendModel'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>emulator</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>external</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendVersion'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>2.0</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </tpm>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <redirdev supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='bus'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>usb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </redirdev>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <channel supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pty</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>unix</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </channel>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <crypto supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>qemu</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendModel'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>builtin</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </crypto>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <interface supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>default</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>passt</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </interface>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <panic supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>isa</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>hyperv</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </panic>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <console supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>null</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vc</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pty</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>dev</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>file</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pipe</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>stdio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>udp</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tcp</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>unix</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>qemu-vdagent</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>dbus</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </console>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </devices>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <features>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <gic supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <vmcoreinfo supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <genid supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <backingStoreInput supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <backup supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <async-teardown supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <s390-pv supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <ps2 supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <tdx supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <sev supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <sgx supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <hyperv supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='features'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>relaxed</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vapic</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>spinlocks</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vpindex</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>runtime</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>synic</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>stimer</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>reset</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vendor_id</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>frequencies</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>reenlightenment</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tlbflush</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>ipi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>avic</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>emsr_bitmap</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>xmm_input</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <defaults>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <spinlocks>4095</spinlocks>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <stimer_direct>on</stimer_direct>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </defaults>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </hyperv>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <launchSecurity supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </features>
Jan 27 18:55:31 compute-0 nova_compute[185480]: </domainCapabilities>
Jan 27 18:55:31 compute-0 nova_compute[185480]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.307 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 27 18:55:31 compute-0 nova_compute[185480]: <domainCapabilities>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <domain>kvm</domain>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <arch>i686</arch>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <vcpu max='4096'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <iothreads supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <os supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <enum name='firmware'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <loader supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>rom</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pflash</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='readonly'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>yes</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>no</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='secure'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>no</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </loader>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </os>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <cpu>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='host-passthrough' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='hostPassthroughMigratable'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>on</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>off</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='maximum' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='maximumMigratable'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>on</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>off</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='host-model' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <vendor>AMD</vendor>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='x2apic'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='hypervisor'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='stibp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='overflow-recov'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='succor'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='lbrv'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='tsc-scale'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='flushbyasid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='pause-filter'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='pfthreshold'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='disable' name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='custom' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='ClearwaterForest'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ddpd-u'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sha512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm3'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='ClearwaterForest-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ddpd-u'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sha512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm3'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cooperlake'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cooperlake-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cooperlake-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Dhyana-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Genoa'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='perfmon-v2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Turin'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='perfmon-v2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbpb'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Turin-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='perfmon-v2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbpb'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-128'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-256'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-128'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-256'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v6'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v7'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='KnightsMill'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512er'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512pf'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='KnightsMill-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512er'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512pf'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G4-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tbm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G5-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tbm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='athlon'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='athlon-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='core2duo'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='core2duo-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='coreduo'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='coreduo-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='n270'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='n270-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='phenom'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='phenom-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </cpu>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <memoryBacking supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <enum name='sourceType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>file</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>anonymous</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>memfd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </memoryBacking>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <devices>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <disk supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='diskDevice'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>disk</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>cdrom</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>floppy</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>lun</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='bus'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>fdc</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>scsi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>usb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>sata</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-non-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </disk>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <graphics supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vnc</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>egl-headless</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>dbus</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </graphics>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <video supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='modelType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vga</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>cirrus</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>none</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>bochs</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>ramfb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </video>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <hostdev supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='mode'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>subsystem</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='startupPolicy'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>default</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>mandatory</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>requisite</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>optional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='subsysType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>usb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pci</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>scsi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='capsType'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='pciBackend'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </hostdev>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <rng supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-non-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendModel'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>random</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>egd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>builtin</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </rng>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <filesystem supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='driverType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>path</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>handle</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtiofs</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </filesystem>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <tpm supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tpm-tis</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tpm-crb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendModel'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>emulator</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>external</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendVersion'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>2.0</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </tpm>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <redirdev supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='bus'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>usb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </redirdev>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <channel supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pty</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>unix</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </channel>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <crypto supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>qemu</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendModel'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>builtin</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </crypto>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <interface supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>default</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>passt</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </interface>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <panic supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>isa</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>hyperv</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </panic>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <console supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>null</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vc</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pty</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>dev</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>file</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pipe</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>stdio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>udp</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tcp</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>unix</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>qemu-vdagent</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>dbus</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </console>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </devices>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <features>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <gic supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <vmcoreinfo supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <genid supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <backingStoreInput supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <backup supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <async-teardown supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <s390-pv supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <ps2 supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <tdx supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <sev supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <sgx supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <hyperv supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='features'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>relaxed</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vapic</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>spinlocks</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vpindex</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>runtime</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>synic</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>stimer</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>reset</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vendor_id</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>frequencies</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>reenlightenment</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tlbflush</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>ipi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>avic</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>emsr_bitmap</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>xmm_input</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <defaults>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <spinlocks>4095</spinlocks>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <stimer_direct>on</stimer_direct>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </defaults>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </hyperv>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <launchSecurity supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </features>
Jan 27 18:55:31 compute-0 nova_compute[185480]: </domainCapabilities>
Jan 27 18:55:31 compute-0 nova_compute[185480]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
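[Editor's note] The <domainCapabilities> document above is what nova's _get_domain_capabilities (host.py:1037) retrieves from libvirt for each machine type before computing guest CPU and device support. A minimal sketch, assuming the libvirt Python bindings are installed and a local qemu:///system connection is reachable, of issuing the same query outside of nova; the emulator path, arch, machine types and virt type are taken from the log above, everything else is illustrative:

    # Sketch only -- not nova's code, just the libvirt call it wraps.
    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        # One query per machine type, mirroring the two dumps in this log
        # ('q35' above, 'pc' below).
        for machine in ('q35', 'pc'):
            caps_xml = conn.getDomainCapabilities(
                '/usr/libexec/qemu-kvm',  # emulator binary, as in <path> above
                'x86_64',                 # architecture
                machine,                  # machine type
                'kvm',                    # virt type, as in <domain>kvm</domain>
                0)                        # flags
            print(caps_xml)
    finally:
        conn.close()

The equivalent one-off check from a shell would be `virsh domcapabilities --arch x86_64 --machine q35 --virttype kvm`, which prints the same XML.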
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.364 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.369 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 27 18:55:31 compute-0 nova_compute[185480]: <domainCapabilities>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <domain>kvm</domain>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <arch>x86_64</arch>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <vcpu max='240'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <iothreads supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <os supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <enum name='firmware'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <loader supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>rom</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pflash</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='readonly'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>yes</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>no</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='secure'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>no</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </loader>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </os>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <cpu>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='host-passthrough' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='hostPassthroughMigratable'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>on</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>off</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='maximum' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='maximumMigratable'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>on</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>off</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='host-model' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <vendor>AMD</vendor>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='x2apic'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='hypervisor'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='stibp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='overflow-recov'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='succor'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='lbrv'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='tsc-scale'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='flushbyasid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='pause-filter'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='pfthreshold'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='disable' name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='custom' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='ClearwaterForest'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ddpd-u'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sha512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm3'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='ClearwaterForest-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ddpd-u'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sha512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm3'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cooperlake'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cooperlake-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cooperlake-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Dhyana-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Genoa'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='perfmon-v2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Turin'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='perfmon-v2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbpb'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Turin-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='perfmon-v2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbpb'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-128'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-256'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-128'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-256'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v6'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v7'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='KnightsMill'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512er'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512pf'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='KnightsMill-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512er'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512pf'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G4-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tbm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G5-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tbm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='athlon'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='athlon-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='core2duo'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='core2duo-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='coreduo'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='coreduo-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='n270'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='n270-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='phenom'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='phenom-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </cpu>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <memoryBacking supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <enum name='sourceType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>file</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>anonymous</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>memfd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </memoryBacking>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <devices>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <disk supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='diskDevice'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>disk</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>cdrom</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>floppy</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>lun</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='bus'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>ide</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>fdc</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>scsi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>usb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>sata</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-non-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </disk>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <graphics supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vnc</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>egl-headless</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>dbus</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </graphics>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <video supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='modelType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vga</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>cirrus</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>none</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>bochs</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>ramfb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </video>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <hostdev supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='mode'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>subsystem</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='startupPolicy'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>default</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>mandatory</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>requisite</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>optional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='subsysType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>usb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pci</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>scsi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='capsType'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='pciBackend'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </hostdev>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <rng supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-non-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendModel'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>random</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>egd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>builtin</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </rng>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <filesystem supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='driverType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>path</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>handle</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtiofs</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </filesystem>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <tpm supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tpm-tis</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tpm-crb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendModel'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>emulator</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>external</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendVersion'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>2.0</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </tpm>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <redirdev supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='bus'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>usb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </redirdev>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <channel supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pty</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>unix</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </channel>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <crypto supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>qemu</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendModel'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>builtin</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </crypto>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <interface supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>default</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>passt</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </interface>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <panic supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>isa</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>hyperv</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </panic>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <console supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>null</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vc</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pty</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>dev</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>file</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pipe</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>stdio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>udp</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tcp</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>unix</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>qemu-vdagent</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>dbus</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </console>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </devices>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <features>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <gic supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <vmcoreinfo supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <genid supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <backingStoreInput supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <backup supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <async-teardown supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <s390-pv supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <ps2 supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <tdx supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <sev supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <sgx supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <hyperv supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='features'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>relaxed</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vapic</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>spinlocks</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vpindex</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>runtime</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>synic</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>stimer</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>reset</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vendor_id</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>frequencies</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>reenlightenment</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tlbflush</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>ipi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>avic</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>emsr_bitmap</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>xmm_input</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <defaults>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <spinlocks>4095</spinlocks>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <stimer_direct>on</stimer_direct>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </defaults>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </hyperv>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <launchSecurity supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </features>
Jan 27 18:55:31 compute-0 nova_compute[185480]: </domainCapabilities>
Jan 27 18:55:31 compute-0 nova_compute[185480]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.457 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 27 18:55:31 compute-0 nova_compute[185480]: <domainCapabilities>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <domain>kvm</domain>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <arch>x86_64</arch>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <vcpu max='4096'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <iothreads supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <os supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <enum name='firmware'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>efi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <loader supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>rom</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pflash</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='readonly'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>yes</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>no</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='secure'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>yes</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>no</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </loader>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </os>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <cpu>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='host-passthrough' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='hostPassthroughMigratable'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>on</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>off</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='maximum' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='maximumMigratable'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>on</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>off</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='host-model' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <vendor>AMD</vendor>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='x2apic'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='hypervisor'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='stibp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='overflow-recov'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='succor'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='lbrv'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='tsc-scale'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='flushbyasid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='pause-filter'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='pfthreshold'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <feature policy='disable' name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <mode name='custom' supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Broadwell-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='ClearwaterForest'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ddpd-u'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sha512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm3'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='ClearwaterForest-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ddpd-u'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sha512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm3'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sm4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cooperlake'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cooperlake-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Cooperlake-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Denverton-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Dhyana-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Genoa'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='perfmon-v2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Milan-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Rome-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Turin'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='perfmon-v2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbpb'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-Turin-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amd-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='auto-ibrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vp2intersect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fs-gs-base-ns'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibpb-brtype'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='no-nested-data-bp'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='null-sel-clr-base'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='perfmon-v2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbpb'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='srso-user-kernel-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='stibp-always-on'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='EPYC-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-128'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-256'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='GraniteRapids-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-128'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-256'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx10-512'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='prefetchiti'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Haswell-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v6'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Icelake-Server-v7'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='IvyBridge-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='KnightsMill'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512er'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512pf'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='KnightsMill-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4fmaps'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-4vnniw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512er'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512pf'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G4-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tbm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Opteron_G5-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fma4'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tbm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xop'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SapphireRapids-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='amx-tile'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-bf16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-fp16'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512-vpopcntdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bitalg'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vbmi2'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrc'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fzrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='la57'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='taa-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='tsx-ldtrk'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='SierraForest-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ifma'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-ne-convert'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx-vnni-int8'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bhi-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='bus-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cmpccxadd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fbsdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='fsrs'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ibrs-all'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='intel-psfd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ipred-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='lam'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mcdt-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pbrsb-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='psdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rrsba-ctrl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='sbdr-ssdp-no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='serialize'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vaes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='vpclmulqdq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Client-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='hle'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='rtm'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Skylake-Server-v5'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512bw'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512cd'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512dq'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512f'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='avx512vl'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='invpcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pcid'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='pku'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='mpx'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v2'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v3'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='core-capability'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='split-lock-detect'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='Snowridge-v4'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='cldemote'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='erms'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='gfni'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdir64b'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='movdiri'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='xsaves'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='athlon'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='athlon-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='core2duo'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='core2duo-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='coreduo'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='coreduo-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='n270'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='n270-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='ss'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='phenom'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <blockers model='phenom-v1'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnow'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <feature name='3dnowext'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </blockers>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </mode>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </cpu>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <memoryBacking supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <enum name='sourceType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>file</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>anonymous</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <value>memfd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </memoryBacking>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <devices>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <disk supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='diskDevice'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>disk</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>cdrom</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>floppy</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>lun</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='bus'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>fdc</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>scsi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>usb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>sata</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-non-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </disk>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <graphics supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vnc</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>egl-headless</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>dbus</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </graphics>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <video supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='modelType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vga</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>cirrus</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>none</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>bochs</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>ramfb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </video>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <hostdev supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='mode'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>subsystem</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='startupPolicy'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>default</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>mandatory</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>requisite</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>optional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='subsysType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>usb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pci</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>scsi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='capsType'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='pciBackend'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </hostdev>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <rng supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtio-non-transitional</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendModel'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>random</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>egd</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>builtin</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </rng>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <filesystem supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='driverType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>path</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>handle</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>virtiofs</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </filesystem>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <tpm supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tpm-tis</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tpm-crb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendModel'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>emulator</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>external</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendVersion'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>2.0</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </tpm>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <redirdev supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='bus'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>usb</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </redirdev>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <channel supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pty</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>unix</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </channel>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <crypto supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>qemu</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendModel'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>builtin</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </crypto>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <interface supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='backendType'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>default</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>passt</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </interface>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <panic supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='model'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>isa</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>hyperv</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </panic>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <console supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='type'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>null</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vc</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pty</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>dev</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>file</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>pipe</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>stdio</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>udp</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tcp</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>unix</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>qemu-vdagent</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>dbus</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </console>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </devices>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   <features>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <gic supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <vmcoreinfo supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <genid supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <backingStoreInput supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <backup supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <async-teardown supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <s390-pv supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <ps2 supported='yes'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <tdx supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <sev supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <sgx supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <hyperv supported='yes'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <enum name='features'>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>relaxed</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vapic</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>spinlocks</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vpindex</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>runtime</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>synic</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>stimer</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>reset</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>vendor_id</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>frequencies</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>reenlightenment</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>tlbflush</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>ipi</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>avic</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>emsr_bitmap</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <value>xmm_input</value>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </enum>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       <defaults>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <spinlocks>4095</spinlocks>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <stimer_direct>on</stimer_direct>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 18:55:31 compute-0 nova_compute[185480]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 18:55:31 compute-0 nova_compute[185480]:       </defaults>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     </hyperv>
Jan 27 18:55:31 compute-0 nova_compute[185480]:     <launchSecurity supported='no'/>
Jan 27 18:55:31 compute-0 nova_compute[185480]:   </features>
Jan 27 18:55:31 compute-0 nova_compute[185480]: </domainCapabilities>
Jan 27 18:55:31 compute-0 nova_compute[185480]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
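
The XML dump ending above is libvirt's domainCapabilities document; nova-compute requests it at start-up to learn which device models, graphics types and guest features the local QEMU/KVM build supports. A minimal sketch of fetching and inspecting the same document with the libvirt Python bindings (the connection URI and the enum picked out are illustrative assumptions, not taken from the log):

    import libvirt
    import xml.etree.ElementTree as ET

    # Read-only connection to the local hypervisor; nova-compute itself
    # normally talks to qemu:///system.
    conn = libvirt.openReadOnly("qemu:///system")

    # Domain capabilities for the default emulator/arch/machine/virttype.
    caps_xml = conn.getDomainCapabilities(None, None, None, None, 0)
    root = ET.fromstring(caps_xml)

    # Example: the supported <video> model types, matching the enum above.
    models = [v.text for v in
              root.findall("./devices/video/enum[@name='modelType']/value")]
    print("video models:", models)  # e.g. ['vga', 'cirrus', 'virtio', 'none', 'bochs', 'ramfb']

    conn.close()
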
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.547 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.547 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.548 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.554 185484 INFO nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Secure Boot support detected
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.558 185484 INFO nova.virt.libvirt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.558 185484 INFO nova.virt.libvirt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.570 185484 DEBUG nova.virt.libvirt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.610 185484 INFO nova.virt.node [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Determined node identity 8877e97b-aaf6-4210-a385-0f49c1a02906 from /var/lib/nova/compute_id
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.665 185484 WARNING nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Compute nodes ['8877e97b-aaf6-4210-a385-0f49c1a02906'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.708 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.792 185484 WARNING nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.793 185484 DEBUG oslo_concurrency.lockutils [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.793 185484 DEBUG oslo_concurrency.lockutils [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.793 185484 DEBUG oslo_concurrency.lockutils [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:55:31 compute-0 nova_compute[185480]: 2026-01-27 18:55:31.794 185484 DEBUG nova.compute.resource_tracker [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 18:55:31 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 27 18:55:31 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 27 18:55:32 compute-0 nova_compute[185480]: 2026-01-27 18:55:32.141 185484 WARNING nova.virt.libvirt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 18:55:32 compute-0 nova_compute[185480]: 2026-01-27 18:55:32.142 185484 DEBUG nova.compute.resource_tracker [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6012MB free_disk=72.6484146118164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
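
The pci_devices field in the resource view above is a JSON list describing every host PCI device libvirt reported. A short sketch that loads that structure and summarises it by vendor (the list is truncated to two of the logged entries for brevity; variable names are illustrative):

    import json
    from collections import Counter

    # Two entries copied from the "Hypervisor/Node resource view" line above.
    pci_devices_json = '''[
      {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010",
       "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000",
       "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}
    ]'''

    devices = json.loads(pci_devices_json)
    print(Counter(d["vendor_id"] for d in devices))  # Counter({'8086': 1, '1af4': 1})
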
Jan 27 18:55:32 compute-0 nova_compute[185480]: 2026-01-27 18:55:32.142 185484 DEBUG oslo_concurrency.lockutils [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:55:32 compute-0 nova_compute[185480]: 2026-01-27 18:55:32.143 185484 DEBUG oslo_concurrency.lockutils [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:55:32 compute-0 nova_compute[185480]: 2026-01-27 18:55:32.163 185484 WARNING nova.compute.resource_tracker [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] No compute node record for compute-0.ctlplane.example.com:8877e97b-aaf6-4210-a385-0f49c1a02906: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 8877e97b-aaf6-4210-a385-0f49c1a02906 could not be found.
Jan 27 18:55:32 compute-0 nova_compute[185480]: 2026-01-27 18:55:32.186 185484 INFO nova.compute.resource_tracker [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 8877e97b-aaf6-4210-a385-0f49c1a02906
Jan 27 18:55:32 compute-0 nova_compute[185480]: 2026-01-27 18:55:32.250 185484 DEBUG nova.compute.resource_tracker [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 18:55:32 compute-0 nova_compute[185480]: 2026-01-27 18:55:32.251 185484 DEBUG nova.compute.resource_tracker [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 18:55:33 compute-0 nova_compute[185480]: 2026-01-27 18:55:33.180 185484 INFO nova.scheduler.client.report [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [req-e1048a13-179a-412e-825b-f86821785214] Created resource provider record via placement API for resource provider with UUID 8877e97b-aaf6-4210-a385-0f49c1a02906 and name compute-0.ctlplane.example.com.
Jan 27 18:55:33 compute-0 podman[185803]: 2026-01-27 18:55:33.290967828 +0000 UTC m=+0.062233460 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:55:33 compute-0 podman[185802]: 2026-01-27 18:55:33.330903631 +0000 UTC m=+0.100140462 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller)
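
The two podman lines above are periodic container health-check events; the 'healthcheck' block in config_data shows the probe is the /openstack/healthcheck script mounted into each container. The same probe can be triggered on demand; a minimal sketch shelling out to the podman CLI (container name taken from the log, everything else an assumption):

    import subprocess

    # 'podman healthcheck run' executes the container's configured check
    # and exits 0 when it passes.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy",
          result.stdout or result.stderr)
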
Jan 27 18:55:33 compute-0 nova_compute[185480]: 2026-01-27 18:55:33.564 185484 DEBUG nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 27 18:55:33 compute-0 nova_compute[185480]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 27 18:55:33 compute-0 nova_compute[185480]: 2026-01-27 18:55:33.565 185484 INFO nova.virt.libvirt.host [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] kernel doesn't support AMD SEV
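
The two preceding entries show how the AMD SEV decision is made: the kvm_amd module parameter file contained "N", so SEV is reported as unsupported. A minimal equivalent check (path taken from the log; the helper name and accepted values are illustrative, not nova's exact implementation):

    from pathlib import Path

    def kernel_supports_amd_sev(param="/sys/module/kvm_amd/parameters/sev"):
        # On this host the file contained "N", hence the
        # "kernel doesn't support AMD SEV" message above.
        p = Path(param)
        return p.exists() and p.read_text().strip().lower() in ("y", "1")

    print(kernel_supports_amd_sev())
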
Jan 27 18:55:33 compute-0 nova_compute[185480]: 2026-01-27 18:55:33.566 185484 DEBUG nova.compute.provider_tree [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Updating inventory in ProviderTree for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 18:55:33 compute-0 nova_compute[185480]: 2026-01-27 18:55:33.566 185484 DEBUG nova.virt.libvirt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 18:55:33 compute-0 nova_compute[185480]: 2026-01-27 18:55:33.699 185484 DEBUG nova.scheduler.client.report [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Updated inventory for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 27 18:55:33 compute-0 nova_compute[185480]: 2026-01-27 18:55:33.699 185484 DEBUG nova.compute.provider_tree [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Updating resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 27 18:55:33 compute-0 nova_compute[185480]: 2026-01-27 18:55:33.699 185484 DEBUG nova.compute.provider_tree [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Updating inventory in ProviderTree for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
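
For each resource class in the inventories above, placement derives the schedulable capacity from the logged fields as roughly (total - reserved) * allocation_ratio. Replaying that arithmetic on the logged values (a sketch, not placement's actual code):

    # Values taken from the update_inventory lines above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(capacity, 1))
    # MEMORY_MB 7167.0   VCPU 32.0   DISK_GB 71.1
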
Jan 27 18:55:33 compute-0 nova_compute[185480]: 2026-01-27 18:55:33.851 185484 DEBUG nova.compute.provider_tree [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Updating resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 27 18:55:33 compute-0 nova_compute[185480]: 2026-01-27 18:55:33.928 185484 DEBUG nova.compute.resource_tracker [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 18:55:33 compute-0 nova_compute[185480]: 2026-01-27 18:55:33.929 185484 DEBUG oslo_concurrency.lockutils [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:55:33 compute-0 nova_compute[185480]: 2026-01-27 18:55:33.929 185484 DEBUG nova.service [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 27 18:55:34 compute-0 nova_compute[185480]: 2026-01-27 18:55:34.167 185484 DEBUG nova.service [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 27 18:55:34 compute-0 nova_compute[185480]: 2026-01-27 18:55:34.167 185484 DEBUG nova.servicegroup.drivers.db [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 27 18:55:35 compute-0 sshd-session[185847]: Accepted publickey for zuul from 192.168.122.31 port 55912 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:55:35 compute-0 systemd-logind[795]: New session 25 of user zuul.
Jan 27 18:55:35 compute-0 systemd[1]: Started Session 25 of User zuul.
Jan 27 18:55:35 compute-0 sshd-session[185847]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:55:36 compute-0 python3.9[186000]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 18:55:37 compute-0 nova_compute[185480]: 2026-01-27 18:55:37.171 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:55:37 compute-0 nova_compute[185480]: 2026-01-27 18:55:37.206 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:55:37 compute-0 sudo[186154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvoivvffeuqcuhiqyulvnfhgnrsaxhym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540137.2538114-31-88558219015426/AnsiballZ_systemd_service.py'
Jan 27 18:55:37 compute-0 sudo[186154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:38 compute-0 python3.9[186156]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:55:38 compute-0 systemd[1]: Reloading.
Jan 27 18:55:38 compute-0 systemd-sysv-generator[186187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:55:38 compute-0 systemd-rc-local-generator[186183]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:55:38 compute-0 sudo[186154]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:39 compute-0 python3.9[186341]: ansible-ansible.builtin.service_facts Invoked
Jan 27 18:55:39 compute-0 network[186358]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 18:55:39 compute-0 network[186359]: 'network-scripts' will be removed from distribution in near future.
Jan 27 18:55:39 compute-0 network[186360]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 18:55:43 compute-0 sudo[186630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjdcaerwfbiecbkonkerhwlqsxdyyleq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540142.9237952-50-261237131893903/AnsiballZ_systemd_service.py'
Jan 27 18:55:43 compute-0 sudo[186630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:43 compute-0 python3.9[186632]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:55:43 compute-0 sudo[186630]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:44 compute-0 sudo[186783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swrxupatpokvplbpvgsnndecwovtqmqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540143.8758733-60-81156360775527/AnsiballZ_file.py'
Jan 27 18:55:44 compute-0 sudo[186783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:44 compute-0 python3.9[186785]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:55:44 compute-0 sudo[186783]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:44 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 18:55:44 compute-0 sudo[186936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qymxvtahvwwaxupqcpwzqfimuizsmdqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540144.6754115-68-6478001134392/AnsiballZ_file.py'
Jan 27 18:55:44 compute-0 sudo[186936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:45 compute-0 python3.9[186938]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:55:45 compute-0 sudo[186936]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:46 compute-0 sudo[187088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezrjdvogbneunkzjqbtmkjcsebsowxcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540145.5686965-77-151412116096961/AnsiballZ_command.py'
Jan 27 18:55:46 compute-0 sudo[187088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:46 compute-0 python3.9[187090]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:55:46 compute-0 sudo[187088]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:47 compute-0 python3.9[187242]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 18:55:47 compute-0 sudo[187392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcbzmwnbpzpcxiysrenlkhvykuvohzvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540147.3601732-95-55081847094853/AnsiballZ_systemd_service.py'
Jan 27 18:55:47 compute-0 sudo[187392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:47 compute-0 python3.9[187394]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:55:48 compute-0 systemd[1]: Reloading.
Jan 27 18:55:48 compute-0 systemd-rc-local-generator[187421]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:55:48 compute-0 systemd-sysv-generator[187424]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:55:48 compute-0 sudo[187392]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:48 compute-0 sudo[187578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcdxapwijgtokaciwrjudbigpachksef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540148.4861205-103-122540252466007/AnsiballZ_command.py'
Jan 27 18:55:48 compute-0 sudo[187578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:48 compute-0 python3.9[187580]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:55:48 compute-0 sudo[187578]: pam_unix(sudo:session): session closed for user root
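
The tasks between 18:55:43 and 18:55:48 are the usual clean-up sequence for retiring a legacy systemd unit: stop and disable it, delete its unit files from /usr/lib and /etc, reload systemd, then clear any remembered failed state. A compact sketch of the same sequence (unit name from the log; the subprocess-based approach is an illustration, not the Ansible modules actually used):

    import subprocess
    from pathlib import Path

    unit = "tripleo_ceilometer_agent_compute.service"

    # Stop and disable the unit; ignore errors if it is already gone.
    subprocess.run(["systemctl", "disable", "--now", unit], check=False)

    # Remove both unit file locations, as the ansible.builtin.file tasks did.
    for p in (Path("/usr/lib/systemd/system") / unit,
              Path("/etc/systemd/system") / unit):
        p.unlink(missing_ok=True)

    # Pick up the removal and clear any failed state left behind.
    subprocess.run(["systemctl", "daemon-reload"], check=False)
    subprocess.run(["systemctl", "reset-failed", unit], check=False)
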
Jan 27 18:55:49 compute-0 sudo[187731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rngnfqyvhyxuarvbtplluebwbrryqlgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540149.2286208-112-5236119117960/AnsiballZ_file.py'
Jan 27 18:55:49 compute-0 sudo[187731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:49 compute-0 python3.9[187733]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:55:49 compute-0 sudo[187731]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:50 compute-0 python3.9[187883]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:55:51 compute-0 sudo[188035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrkqdxhsepeobtqwubrsqvfvvnknvsbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540150.782157-128-223318449725900/AnsiballZ_group.py'
Jan 27 18:55:51 compute-0 sudo[188035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:51 compute-0 python3.9[188037]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Jan 27 18:55:51 compute-0 sudo[188035]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:52 compute-0 sudo[188187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msxyohplbzjxovquovllbhnzvmoyixbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540151.7831273-139-12774184087040/AnsiballZ_getent.py'
Jan 27 18:55:52 compute-0 sudo[188187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:52 compute-0 python3.9[188189]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 27 18:55:52 compute-0 sudo[188187]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:53 compute-0 sudo[188340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgdsnfzexpizlrwvtiniylopvdfwujwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540152.9151404-147-192634900001938/AnsiballZ_group.py'
Jan 27 18:55:53 compute-0 sudo[188340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:53 compute-0 python3.9[188342]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 18:55:53 compute-0 groupadd[188343]: group added to /etc/group: name=ceilometer, GID=42405
Jan 27 18:55:53 compute-0 groupadd[188343]: group added to /etc/gshadow: name=ceilometer
Jan 27 18:55:53 compute-0 groupadd[188343]: new group: name=ceilometer, GID=42405
Jan 27 18:55:53 compute-0 sudo[188340]: pam_unix(sudo:session): session closed for user root
Jan 27 18:55:54 compute-0 sudo[188498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpxsbcjqzxicajoejfrdqfaypazzzbbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540153.6987386-155-56381186653899/AnsiballZ_user.py'
Jan 27 18:55:54 compute-0 sudo[188498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:55:54 compute-0 python3.9[188500]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 18:55:54 compute-0 useradd[188502]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 18:55:54 compute-0 useradd[188502]: add 'ceilometer' to group 'libvirt'
Jan 27 18:55:54 compute-0 useradd[188502]: add 'ceilometer' to shadow group 'libvirt'
Jan 27 18:55:54 compute-0 sudo[188498]: pam_unix(sudo:session): session closed for user root
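
The useradd lines above create the ceilometer account (uid/gid 42405) and add it to the supplementary libvirt group so the polling agent can talk to the hypervisor. A tiny sketch verifying both facts with the standard library (purely illustrative):

    import grp
    import pwd

    print(pwd.getpwnam("ceilometer").pw_uid)               # 42405
    print("ceilometer" in grp.getgrnam("libvirt").gr_mem)  # True
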
Jan 27 18:55:55 compute-0 python3.9[188658]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:55:56 compute-0 python3.9[188779]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769540155.4732041-181-176173519684273/.source.conf _original_basename=ceilometer.conf follow=False checksum=806b21daa538a66a80669be8bf74c414d178dfbc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:55:57 compute-0 python3.9[188929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:55:57 compute-0 python3.9[189050]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769540156.9886622-181-120235958911330/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:55:58 compute-0 python3.9[189201]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:55:59 compute-0 python3.9[189322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769540158.1290329-181-62162315190214/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
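
Each stat/copy pair above (ceilometer.conf, polling.yaml, custom.conf) is Ansible's idempotent file deployment: the remote file is hashed first and the copy only happens when the SHA-1 differs from the source checksum. A minimal sketch of the same pattern (paths and helper names are illustrative):

    import hashlib
    import shutil
    from pathlib import Path

    def sha1_of(path):
        return hashlib.sha1(Path(path).read_bytes()).hexdigest()

    def copy_if_changed(src, dest):
        # Mirror the stat -> copy sequence in the log: rewrite the destination
        # only when it is missing or its checksum differs from the source.
        dest = Path(dest)
        if not dest.exists() or sha1_of(src) != sha1_of(dest):
            shutil.copy2(src, dest)
            return True
        return False

    # e.g. copy_if_changed("ceilometer.conf", "/var/lib/openstack/telemetry/ceilometer.conf")
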
Jan 27 18:55:59 compute-0 python3.9[189472]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:56:00 compute-0 python3.9[189624]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:56:01 compute-0 python3.9[189776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:01 compute-0 python3.9[189897]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540160.671103-240-157636869312681/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:56:02 compute-0 python3.9[190047]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:03 compute-0 python3.9[190168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/openstack_network_exporter.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540162.0970483-240-152117853749638/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=87dede51a10e22722618c1900db75cb764463d91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:56:03 compute-0 podman[190293]: 2026-01-27 18:56:03.681561631 +0000 UTC m=+0.074120048 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 18:56:03 compute-0 podman[190292]: 2026-01-27 18:56:03.723323162 +0000 UTC m=+0.115672664 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 27 18:56:03 compute-0 python3.9[190336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:04 compute-0 python3.9[190483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540163.3234513-269-255205750438780/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:56:05 compute-0 python3.9[190633]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:05 compute-0 python3.9[190754]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540164.665157-285-146070393811663/.source.yaml _original_basename=node_exporter.yaml follow=False checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:06 compute-0 python3.9[190904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:07 compute-0 python3.9[191025]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540165.9443665-300-171858301917142/.source.yaml _original_basename=podman_exporter.yaml follow=False checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:07 compute-0 python3.9[191175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:08 compute-0 python3.9[191296]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540167.254237-315-224733483511536/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
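
The mode=420 in the last three copy invocations is simply the decimal rendering of the octal permission 0644, recorded as the integer the module received; a one-line check:

    print(oct(420), 420 == 0o644)   # 0o644 True
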
Jan 27 18:56:08 compute-0 sudo[191446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxcdjvuxsvkvgejsowwslhztmjjmsgbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540168.4769995-330-139372205085627/AnsiballZ_file.py'
Jan 27 18:56:08 compute-0 sudo[191446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:08 compute-0 python3.9[191448]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:08 compute-0 sudo[191446]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:09 compute-0 sudo[191598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxpbrnjhauxtatbvkhufpztzbkvtlinf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540169.1443741-338-198567514067294/AnsiballZ_file.py'
Jan 27 18:56:09 compute-0 sudo[191598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:09 compute-0 python3.9[191600]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:09 compute-0 sudo[191598]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:10 compute-0 python3.9[191750]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:56:11 compute-0 python3.9[191902]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:56:11 compute-0 python3.9[192054]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:56:12 compute-0 sudo[192206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymppneymnkwgkqqshflgwkutpdykxzly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540172.0973618-370-36579148770570/AnsiballZ_file.py'
Jan 27 18:56:12 compute-0 sudo[192206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:12 compute-0 python3.9[192208]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:56:12 compute-0 sudo[192206]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:13 compute-0 sudo[192358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvrphxlssmwdrvtcqwcttbgduaawmbrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540172.8371248-378-132192822613284/AnsiballZ_systemd_service.py'
Jan 27 18:56:13 compute-0 sudo[192358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:13 compute-0 python3.9[192360]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:56:13 compute-0 systemd[1]: Reloading.
Jan 27 18:56:13 compute-0 systemd-rc-local-generator[192391]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:56:13 compute-0 systemd-sysv-generator[192396]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:56:13 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 27 18:56:13 compute-0 sudo[192358]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:14 compute-0 sshd-session[192399]: Invalid user sol from 45.148.10.240 port 37948
Jan 27 18:56:14 compute-0 sshd-session[192399]: Connection closed by invalid user sol 45.148.10.240 port 37948 [preauth]
Jan 27 18:56:14 compute-0 sudo[192552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhtmvcbozctgvywpxoiuizonthdmoqji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540174.0437775-387-44945073533330/AnsiballZ_stat.py'
Jan 27 18:56:14 compute-0 sudo[192552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:14 compute-0 python3.9[192554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:14 compute-0 sudo[192552]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:14 compute-0 sudo[192675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itohhmbinmtszswkocgsynnxdchnqhax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540174.0437775-387-44945073533330/AnsiballZ_copy.py'
Jan 27 18:56:14 compute-0 sudo[192675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:15 compute-0 python3.9[192677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540174.0437775-387-44945073533330/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:56:15 compute-0 sudo[192675]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:15 compute-0 sudo[192751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyhrguxlbblihorkbtannmsydwmtdbjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540174.0437775-387-44945073533330/AnsiballZ_stat.py'
Jan 27 18:56:15 compute-0 sudo[192751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:15 compute-0 python3.9[192753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:15 compute-0 sudo[192751]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:15 compute-0 sudo[192874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hddtljwnwcxwzmhljdcxqmtrutnncrzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540174.0437775-387-44945073533330/AnsiballZ_copy.py'
Jan 27 18:56:15 compute-0 sudo[192874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:16 compute-0 python3.9[192876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540174.0437775-387-44945073533330/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:56:16 compute-0 sudo[192874]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:17 compute-0 sudo[193026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxbhmyxdyzsueqofycslhhqmmoezthvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540176.7738168-419-203118218758290/AnsiballZ_file.py'
Jan 27 18:56:17 compute-0 sudo[193026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:17 compute-0 python3.9[193028]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:17 compute-0 sudo[193026]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:17 compute-0 sudo[193178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksarigbydpplakzuysxxpljscalakyil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540177.5233865-427-42470045484429/AnsiballZ_file.py'
Jan 27 18:56:17 compute-0 sudo[193178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:17 compute-0 python3.9[193180]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:56:17 compute-0 sudo[193178]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:18 compute-0 sudo[193330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odsjtyotowlxbpfzydbqbcbkgakaykmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540178.1838753-435-148787733004991/AnsiballZ_stat.py'
Jan 27 18:56:18 compute-0 sudo[193330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:18 compute-0 python3.9[193332]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:18 compute-0 sudo[193330]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:19 compute-0 sudo[193453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znkliwyabzkzriqdplppkmtzdvwuqbht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540178.1838753-435-148787733004991/AnsiballZ_copy.py'
Jan 27 18:56:19 compute-0 sudo[193453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:19 compute-0 python3.9[193455]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540178.1838753-435-148787733004991/.source.json _original_basename=.ogatui9e follow=False checksum=ce2b0c83293a970bafffa087afa083dd7c93a79c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:19 compute-0 sudo[193453]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:19 compute-0 python3.9[193605]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:56:20.494 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:56:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:56:20.495 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:56:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:56:20.496 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:56:21 compute-0 sudo[194026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyefiurfqiqftctseckjwwvcltokzeec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540181.5077517-475-189056957863569/AnsiballZ_container_config_data.py'
Jan 27 18:56:21 compute-0 sudo[194026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:22 compute-0 python3.9[194028]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_pattern=*.json debug=False
Jan 27 18:56:22 compute-0 sudo[194026]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:22 compute-0 sudo[194178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqikjmxcxydpscicirwxrpwmnmihruuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540182.5412-486-224314959697887/AnsiballZ_container_config_hash.py'
Jan 27 18:56:22 compute-0 sudo[194178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:23 compute-0 python3.9[194180]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 18:56:23 compute-0 sudo[194178]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:24 compute-0 sudo[194330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebuvyqfywjugwtnbrttarhdpatlfxzwh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540183.5110748-496-112466675252675/AnsiballZ_edpm_container_manage.py'
Jan 27 18:56:24 compute-0 sudo[194330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:24 compute-0 python3[194332]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_id=ceilometer_agent_compute config_overrides={} config_patterns=*.json containers=['ceilometer_agent_compute'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 18:56:24 compute-0 podman[194370]: 2026-01-27 18:56:24.540244408 +0000 UTC m=+0.055667586 container create a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Jan 27 18:56:24 compute-0 podman[194370]: 2026-01-27 18:56:24.512329159 +0000 UTC m=+0.027752387 image pull 68a60f9093568ce7a1c5b4524fb1e8f03692d56fcec899fd30bbb31f7cc46992 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 27 18:56:24 compute-0 python3[194332]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck compute --label config_id=ceilometer_agent_compute --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Jan 27 18:56:24 compute-0 sudo[194330]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:25 compute-0 sudo[194558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksqaojzotyfkfbqkcpwrxkagjhoupdak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540184.8547466-504-280397205448067/AnsiballZ_stat.py'
Jan 27 18:56:25 compute-0 sudo[194558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:25 compute-0 python3.9[194560]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:56:25 compute-0 sudo[194558]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:25 compute-0 sudo[194712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqklknppuxgtrdgmbodogrtquxmpkmdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540185.6160922-513-228802256315706/AnsiballZ_file.py'
Jan 27 18:56:25 compute-0 sudo[194712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:26 compute-0 python3.9[194714]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:26 compute-0 sudo[194712]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:26 compute-0 sudo[194788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmlewmllyovtpssoaktvupxzfwiezqkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540185.6160922-513-228802256315706/AnsiballZ_stat.py'
Jan 27 18:56:26 compute-0 sudo[194788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:26 compute-0 python3.9[194790]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:56:26 compute-0 sudo[194788]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:27 compute-0 sudo[194939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qttlqkrjlfmdfoznlqoiexgtrckbseoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540186.6341069-513-65595129862656/AnsiballZ_copy.py'
Jan 27 18:56:27 compute-0 sudo[194939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:27 compute-0 python3.9[194941]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769540186.6341069-513-65595129862656/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:27 compute-0 sudo[194939]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:27 compute-0 sudo[195015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufhbkijmwjexzzbimgmsfbsvalgzsnvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540186.6341069-513-65595129862656/AnsiballZ_systemd.py'
Jan 27 18:56:27 compute-0 sudo[195015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:28 compute-0 python3.9[195017]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:56:28 compute-0 systemd[1]: Reloading.
Jan 27 18:56:28 compute-0 systemd-rc-local-generator[195044]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:56:28 compute-0 systemd-sysv-generator[195047]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:56:28 compute-0 sudo[195015]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:28 compute-0 sudo[195127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxmvsjfmqotebilxitcqhwlsfonepgfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540186.6341069-513-65595129862656/AnsiballZ_systemd.py'
Jan 27 18:56:28 compute-0 sudo[195127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:29 compute-0 python3.9[195129]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:56:30 compute-0 systemd[1]: Reloading.
Jan 27 18:56:30 compute-0 systemd-rc-local-generator[195157]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:56:30 compute-0 systemd-sysv-generator[195163]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:56:30 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.517 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.518 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.518 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.519 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.532 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.532 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.533 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.533 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.534 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.534 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.534 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.534 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.534 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.563 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.563 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.563 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.564 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 18:56:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8e77dbf96dcd11983eb29df32685dd30b9993f62556fa0bead14dd6fb39492/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 18:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8e77dbf96dcd11983eb29df32685dd30b9993f62556fa0bead14dd6fb39492/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 18:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8e77dbf96dcd11983eb29df32685dd30b9993f62556fa0bead14dd6fb39492/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 27 18:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8e77dbf96dcd11983eb29df32685dd30b9993f62556fa0bead14dd6fb39492/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 27 18:56:30 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4.
Jan 27 18:56:30 compute-0 podman[195169]: 2026-01-27 18:56:30.659092603 +0000 UTC m=+0.135956227 container init a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: + sudo -E kolla_set_configs
Jan 27 18:56:30 compute-0 podman[195169]: 2026-01-27 18:56:30.681129043 +0000 UTC m=+0.157992667 container start a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 27 18:56:30 compute-0 sudo[195190]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: sudo: unable to send audit message: Operation not permitted
Jan 27 18:56:30 compute-0 sudo[195190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 18:56:30 compute-0 podman[195169]: ceilometer_agent_compute
Jan 27 18:56:30 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Jan 27 18:56:30 compute-0 sudo[195127]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:30 compute-0 podman[195191]: 2026-01-27 18:56:30.743543365 +0000 UTC m=+0.052255361 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 18:56:30 compute-0 systemd[1]: a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4-77f4bde98b1aa939.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 18:56:30 compute-0 systemd[1]: a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4-77f4bde98b1aa939.service: Failed with result 'exit-code'.
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Validating config file
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Copying service configuration files
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: INFO:__main__:Writing out command to execute
Jan 27 18:56:30 compute-0 sudo[195190]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: ++ cat /run_command
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: + ARGS=
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: + sudo kolla_copy_cacerts
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.771 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.774 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5990MB free_disk=72.64550399780273GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.774 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.774 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:56:30 compute-0 sudo[195231]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: sudo: unable to send audit message: Operation not permitted
Jan 27 18:56:30 compute-0 sudo[195231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 18:56:30 compute-0 sudo[195231]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: + [[ ! -n '' ]]
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: + . kolla_extend_start
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: + umask 0022
Jan 27 18:56:30 compute-0 ceilometer_agent_compute[195184]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.835 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.835 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.864 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.881 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.883 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 18:56:30 compute-0 nova_compute[185480]: 2026-01-27 18:56:30.883 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:56:31 compute-0 python3.9[195365]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.640 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.640 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.640 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.640 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.641 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.641 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.641 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.641 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.641 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.641 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.641 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.641 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.641 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.641 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.641 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.642 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.642 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.642 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.642 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.642 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.642 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.642 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.642 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.642 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.642 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.642 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.643 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.644 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.645 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.646 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.647 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.647 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.647 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.647 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.647 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.647 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.647 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.647 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.647 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.647 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.647 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.647 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.648 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.649 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.650 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.651 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.652 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.652 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.652 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.652 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.652 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.673 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.674 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.674 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.674 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.674 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.674 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.674 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.675 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.676 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.677 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.678 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.679 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.680 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.681 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.681 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.681 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.681 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.681 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.681 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.681 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.681 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.681 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.681 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.681 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.681 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.682 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.683 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.684 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.684 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.684 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.684 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.684 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.686 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.688 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.689 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.923 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.931 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.931 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 27 18:56:31 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:31.931 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.060 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.061 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.061 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.061 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.061 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.061 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.061 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.061 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.061 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.061 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.061 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.062 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.062 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.062 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.062 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.062 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.062 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.062 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.063 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.063 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.063 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.063 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.063 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.063 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.063 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.063 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.064 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.064 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.064 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.064 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.064 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.064 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.064 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.064 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.065 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.065 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.065 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.065 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.065 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.065 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.065 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.065 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.065 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.065 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.066 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.066 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.066 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.066 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.066 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.066 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.066 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.066 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.066 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.067 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.067 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.067 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.067 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.067 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.067 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.067 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.067 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.067 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.067 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.067 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.067 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.068 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.068 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.068 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.068 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.068 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.068 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.068 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.068 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.068 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.068 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.068 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.068 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.069 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.069 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.069 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.069 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.069 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.069 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.069 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.069 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.069 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.069 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.069 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.070 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.071 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.072 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.073 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.074 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.074 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.074 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.074 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.074 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.074 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.074 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.074 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.074 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.074 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.074 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.074 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.075 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.078 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.091 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.091 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.092 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:56:32.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:56:32 compute-0 sudo[195528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cchcujhccziigpqbfemqiaicugfjvosq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540192.0149055-558-185376314120718/AnsiballZ_stat.py'
Jan 27 18:56:32 compute-0 sudo[195528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:32 compute-0 python3.9[195530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:32 compute-0 sudo[195528]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:32 compute-0 sudo[195653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewxhmerwgroeqagvdmjuucpxyqvdzrsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540192.0149055-558-185376314120718/AnsiballZ_copy.py'
Jan 27 18:56:32 compute-0 sudo[195653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:33 compute-0 python3.9[195655]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540192.0149055-558-185376314120718/.source.yaml _original_basename=.q0s4sfbl follow=False checksum=169f2ad74d82571000a1da4a310f45e024f27d77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:33 compute-0 sudo[195653]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:33 compute-0 sudo[195805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfqlrbmpxarlzpvvsxeyslwmggfaysmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540193.2590625-573-180180137243428/AnsiballZ_stat.py'
Jan 27 18:56:33 compute-0 sudo[195805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:33 compute-0 python3.9[195807]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:33 compute-0 sudo[195805]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:34 compute-0 sudo[195955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzjhvlwqynudkvgvaytfaccdouhxcmvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540193.2590625-573-180180137243428/AnsiballZ_copy.py'
Jan 27 18:56:34 compute-0 sudo[195955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:34 compute-0 podman[195903]: 2026-01-27 18:56:34.177895637 +0000 UTC m=+0.064720792 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 18:56:34 compute-0 podman[195902]: 2026-01-27 18:56:34.208063625 +0000 UTC m=+0.095987965 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 18:56:34 compute-0 python3.9[195966]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540193.2590625-573-180180137243428/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:56:34 compute-0 sudo[195955]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:35 compute-0 sudo[196126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxhbfqogdqwmnlgyflkwgwoizkzoumsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540194.8573804-594-252683718013886/AnsiballZ_file.py'
Jan 27 18:56:35 compute-0 sudo[196126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:35 compute-0 python3.9[196128]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:35 compute-0 sudo[196126]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:35 compute-0 sudo[196278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhfsohmwsjslqlqpsxztjrokznknfotx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540195.53986-602-78031133033759/AnsiballZ_file.py'
Jan 27 18:56:35 compute-0 sudo[196278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:35 compute-0 python3.9[196280]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:56:36 compute-0 sudo[196278]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:36 compute-0 sudo[196430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xikmmogoqrznixvpovhsmjwrphpccezl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540196.1660426-610-277580334284028/AnsiballZ_stat.py'
Jan 27 18:56:36 compute-0 sudo[196430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:36 compute-0 python3.9[196432]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:36 compute-0 sudo[196430]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:36 compute-0 sudo[196508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzkwnwgwifcpeqfsewxqnmhtcohshjiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540196.1660426-610-277580334284028/AnsiballZ_file.py'
Jan 27 18:56:36 compute-0 sudo[196508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:37 compute-0 python3.9[196510]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.s4n3j5ea recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:37 compute-0 sudo[196508]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:37 compute-0 python3.9[196660]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/node_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:39 compute-0 sudo[197081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfdooyfdhnlihmvxebgayyrtzvofddgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540199.3470654-647-144907690389398/AnsiballZ_container_config_data.py'
Jan 27 18:56:39 compute-0 sudo[197081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:39 compute-0 python3.9[197083]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/node_exporter config_pattern=*.json debug=False
Jan 27 18:56:39 compute-0 sudo[197081]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:40 compute-0 sudo[197233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imrziqtghmrxpqkknbiuendplljaqbln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540200.1664646-658-3783762609451/AnsiballZ_container_config_hash.py'
Jan 27 18:56:40 compute-0 sudo[197233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:40 compute-0 python3.9[197235]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 18:56:40 compute-0 sudo[197233]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:41 compute-0 sudo[197385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lozblvufpdunvjiwsgejblfjeulvyepp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540200.978892-668-141549736932429/AnsiballZ_edpm_container_manage.py'
Jan 27 18:56:41 compute-0 sudo[197385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:41 compute-0 python3[197387]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/node_exporter config_id=node_exporter config_overrides={} config_patterns=*.json containers=['node_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 18:56:41 compute-0 podman[197423]: 2026-01-27 18:56:41.750981124 +0000 UTC m=+0.046049926 container create 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 18:56:41 compute-0 podman[197423]: 2026-01-27 18:56:41.727388919 +0000 UTC m=+0.022457751 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 27 18:56:41 compute-0 python3[197387]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck node_exporter --label config_id=node_exporter --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Jan 27 18:56:41 compute-0 sudo[197385]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:42 compute-0 sudo[197611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhcxmhioiowdreblcdcmdshfhgusoixq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540202.0504384-676-52248640010577/AnsiballZ_stat.py'
Jan 27 18:56:42 compute-0 sudo[197611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:42 compute-0 python3.9[197613]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:56:42 compute-0 sudo[197611]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:43 compute-0 sudo[197765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oghdrbgadhdigcxuowhvmqlhflqrcste ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540202.808849-685-224916693618399/AnsiballZ_file.py'
Jan 27 18:56:43 compute-0 sudo[197765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:43 compute-0 python3.9[197767]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:43 compute-0 sudo[197765]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:43 compute-0 sudo[197841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrwetjlswaydejwyzuwfwyicgdklgnoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540202.808849-685-224916693618399/AnsiballZ_stat.py'
Jan 27 18:56:43 compute-0 sudo[197841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:43 compute-0 python3.9[197843]: ansible-stat Invoked with path=/etc/systemd/system/edpm_node_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:56:43 compute-0 sudo[197841]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:44 compute-0 sudo[197992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkjamyqrgqaskcdkvubkbzvsbtslotar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540203.9919093-685-243438008092512/AnsiballZ_copy.py'
Jan 27 18:56:44 compute-0 sudo[197992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:44 compute-0 python3.9[197994]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769540203.9919093-685-243438008092512/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:44 compute-0 sudo[197992]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:44 compute-0 sudo[198068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywkrkfejeghzswyasfkwpmfzjvyupuyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540203.9919093-685-243438008092512/AnsiballZ_systemd.py'
Jan 27 18:56:44 compute-0 sudo[198068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:45 compute-0 python3.9[198070]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:56:45 compute-0 systemd[1]: Reloading.
Jan 27 18:56:45 compute-0 systemd-sysv-generator[198102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:56:45 compute-0 systemd-rc-local-generator[198097]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:56:45 compute-0 sudo[198068]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:45 compute-0 sudo[198180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vecuybmodfezbozfeiczobtwhtjojkda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540203.9919093-685-243438008092512/AnsiballZ_systemd.py'
Jan 27 18:56:45 compute-0 sudo[198180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:46 compute-0 python3.9[198182]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:56:46 compute-0 systemd[1]: Reloading.
Jan 27 18:56:46 compute-0 systemd-sysv-generator[198215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:56:46 compute-0 systemd-rc-local-generator[198211]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:56:46 compute-0 systemd[1]: Starting node_exporter container...
Jan 27 18:56:46 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b5f527202a0ee477d544794a48bea4bb0e2b2a47b56454b70681ffb8a4084c/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 18:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b5f527202a0ee477d544794a48bea4bb0e2b2a47b56454b70681ffb8a4084c/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 18:56:46 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3.
Jan 27 18:56:46 compute-0 podman[198222]: 2026-01-27 18:56:46.667159222 +0000 UTC m=+0.129042737 container init 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.682Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.682Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.682Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.682Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.682Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=arp
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=bcache
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=bonding
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=cpu
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=edac
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=filefd
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=netclass
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=netdev
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=netstat
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=nfs
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.683Z caller=node_exporter.go:117 level=info collector=nvme
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.684Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.684Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.684Z caller=node_exporter.go:117 level=info collector=softnet
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.684Z caller=node_exporter.go:117 level=info collector=systemd
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.684Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.684Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.684Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.684Z caller=node_exporter.go:117 level=info collector=xfs
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.684Z caller=node_exporter.go:117 level=info collector=zfs
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.684Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Jan 27 18:56:46 compute-0 node_exporter[198237]: ts=2026-01-27T18:56:46.685Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Jan 27 18:56:46 compute-0 podman[198222]: 2026-01-27 18:56:46.6943974 +0000 UTC m=+0.156280885 container start 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 18:56:46 compute-0 podman[198222]: node_exporter
Jan 27 18:56:46 compute-0 systemd[1]: Started node_exporter container.
Jan 27 18:56:46 compute-0 sudo[198180]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:46 compute-0 podman[198246]: 2026-01-27 18:56:46.753342384 +0000 UTC m=+0.051912315 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 18:56:46 compute-0 rsyslogd[1006]: imjournal from <np0005597875:node_exporter>: begin to drop messages due to rate-limiting
Jan 27 18:56:47 compute-0 python3.9[198420]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 18:56:48 compute-0 sudo[198570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrffegkybnakzypkfxcfgttfooinxyfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540208.0618558-730-226647818874159/AnsiballZ_stat.py'
Jan 27 18:56:48 compute-0 sudo[198570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:48 compute-0 python3.9[198572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:48 compute-0 sudo[198570]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:48 compute-0 sudo[198695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teloixfsedkeydpklygnvujjvxgwucub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540208.0618558-730-226647818874159/AnsiballZ_copy.py'
Jan 27 18:56:48 compute-0 sudo[198695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:49 compute-0 python3.9[198697]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540208.0618558-730-226647818874159/.source.yaml _original_basename=.p5zp7f30 follow=False checksum=f25bd5ab1ec073a5287e554a377d6e269abb148e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:49 compute-0 sudo[198695]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:49 compute-0 sudo[198847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heaschzzmhafgbfuuketpdycigmoaaah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540209.4471624-745-43495271038594/AnsiballZ_stat.py'
Jan 27 18:56:49 compute-0 sudo[198847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:49 compute-0 python3.9[198849]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:49 compute-0 sudo[198847]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:50 compute-0 sudo[198970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmvglqczhzrwzctqdvvarideisinwlue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540209.4471624-745-43495271038594/AnsiballZ_copy.py'
Jan 27 18:56:50 compute-0 sudo[198970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:50 compute-0 python3.9[198972]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540209.4471624-745-43495271038594/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:56:50 compute-0 sudo[198970]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:51 compute-0 sudo[199122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgerjavbcxremlvhhmzomctdwrpmkbln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540211.0400326-766-138892231500/AnsiballZ_file.py'
Jan 27 18:56:51 compute-0 sudo[199122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:51 compute-0 python3.9[199124]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:51 compute-0 sudo[199122]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:52 compute-0 sudo[199274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aebmrjlbjqsmiydvubuikjwqwgzzkbmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540211.7150166-774-141445184616569/AnsiballZ_file.py'
Jan 27 18:56:52 compute-0 sudo[199274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:52 compute-0 python3.9[199276]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:56:52 compute-0 sudo[199274]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:52 compute-0 sudo[199426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ustsupolpzniqxaduarlvgugkotdqhgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540212.5671227-782-41245968519813/AnsiballZ_stat.py'
Jan 27 18:56:52 compute-0 sudo[199426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:53 compute-0 python3.9[199428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:56:53 compute-0 sudo[199426]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:53 compute-0 sudo[199504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wulrpgljilnkbbzitrizqyeahwykdtbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540212.5671227-782-41245968519813/AnsiballZ_file.py'
Jan 27 18:56:53 compute-0 sudo[199504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:53 compute-0 python3.9[199506]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.x016pwmu recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:53 compute-0 sudo[199504]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:54 compute-0 python3.9[199656]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/podman_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:56:56 compute-0 sudo[200077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqrvmomeirxjxwnplbccqfrbqjwbkoha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540215.8036325-819-165970389093078/AnsiballZ_container_config_data.py'
Jan 27 18:56:56 compute-0 sudo[200077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:56 compute-0 python3.9[200079]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/podman_exporter config_pattern=*.json debug=False
Jan 27 18:56:56 compute-0 sudo[200077]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:56 compute-0 sudo[200229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luioahysyescgfwujdiydqpxzqokamzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540216.6553464-830-265062130119433/AnsiballZ_container_config_hash.py'
Jan 27 18:56:56 compute-0 sudo[200229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:57 compute-0 python3.9[200231]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 18:56:57 compute-0 sudo[200229]: pam_unix(sudo:session): session closed for user root
Jan 27 18:56:57 compute-0 sudo[200381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncyoewpqmjrtplzrzasnbstvqocviasg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540217.4851704-840-152669156256212/AnsiballZ_edpm_container_manage.py'
Jan 27 18:56:57 compute-0 sudo[200381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:56:58 compute-0 python3[200383]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/podman_exporter config_id=podman_exporter config_overrides={} config_patterns=*.json containers=['podman_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 18:56:59 compute-0 podman[200396]: 2026-01-27 18:56:59.69741246 +0000 UTC m=+1.518479534 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 27 18:56:59 compute-0 podman[200492]: 2026-01-27 18:56:59.856004671 +0000 UTC m=+0.049998989 container create 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=podman_exporter)
Jan 27 18:56:59 compute-0 podman[200492]: 2026-01-27 18:56:59.829620184 +0000 UTC m=+0.023614522 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 27 18:56:59 compute-0 python3[200383]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env CONTAINER_HOST=unix:///run/podman/podman.sock --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=podman_exporter --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Jan 27 18:56:59 compute-0 sudo[200381]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:00 compute-0 sudo[200677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rffmlmwkkwotrxhkshpwkvmzrbrbmlze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540220.1398978-848-64850053535339/AnsiballZ_stat.py'
Jan 27 18:57:00 compute-0 sudo[200677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:00 compute-0 python3.9[200679]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:57:00 compute-0 sudo[200677]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:01 compute-0 sudo[200846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beoofrfimfvjwdwtidkqcopynudebyin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540220.9609046-857-65688606204277/AnsiballZ_file.py'
Jan 27 18:57:01 compute-0 sudo[200846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:01 compute-0 podman[200805]: 2026-01-27 18:57:01.294548133 +0000 UTC m=+0.072655336 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, io.buildah.version=1.41.4, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 18:57:01 compute-0 systemd[1]: a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4-77f4bde98b1aa939.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 18:57:01 compute-0 systemd[1]: a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4-77f4bde98b1aa939.service: Failed with result 'exit-code'.
Jan 27 18:57:01 compute-0 python3.9[200852]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:01 compute-0 sudo[200846]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:01 compute-0 sudo[200926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ertoiiuuebzwvcxwwxdegvumrrrdthkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540220.9609046-857-65688606204277/AnsiballZ_stat.py'
Jan 27 18:57:01 compute-0 sudo[200926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:01 compute-0 python3.9[200928]: ansible-stat Invoked with path=/etc/systemd/system/edpm_podman_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:57:01 compute-0 sudo[200926]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:02 compute-0 sudo[201077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cegadinfxfrqjhmlzfnrxjpjfuhxuhkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540222.0524504-857-120748956968885/AnsiballZ_copy.py'
Jan 27 18:57:02 compute-0 sudo[201077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:02 compute-0 python3.9[201079]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769540222.0524504-857-120748956968885/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:02 compute-0 sudo[201077]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:02 compute-0 sudo[201153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohllupwbjldphzdptdhvpoudweqkqjyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540222.0524504-857-120748956968885/AnsiballZ_systemd.py'
Jan 27 18:57:02 compute-0 sudo[201153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:03 compute-0 python3.9[201155]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:57:03 compute-0 systemd[1]: Reloading.
Jan 27 18:57:03 compute-0 systemd-rc-local-generator[201178]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:57:03 compute-0 systemd-sysv-generator[201181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:57:03 compute-0 sudo[201153]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:03 compute-0 sudo[201264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpdzzyuhdpoxnnxorszuojchavtddtaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540222.0524504-857-120748956968885/AnsiballZ_systemd.py'
Jan 27 18:57:03 compute-0 sudo[201264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:04 compute-0 podman[201267]: 2026-01-27 18:57:04.286582665 +0000 UTC m=+0.065742899 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 18:57:04 compute-0 python3.9[201266]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:57:04 compute-0 systemd[1]: Reloading.
Jan 27 18:57:04 compute-0 podman[201289]: 2026-01-27 18:57:04.484365202 +0000 UTC m=+0.159443372 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 18:57:04 compute-0 systemd-rc-local-generator[201341]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:57:04 compute-0 systemd-sysv-generator[201344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:57:04 compute-0 systemd[1]: Starting podman_exporter container...
Jan 27 18:57:04 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79f3f30886b0a73f8bfa5ae251923c29a09eec163d1110ff4f581d643d966f3/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 18:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79f3f30886b0a73f8bfa5ae251923c29a09eec163d1110ff4f581d643d966f3/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 18:57:04 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39.
Jan 27 18:57:04 compute-0 podman[201351]: 2026-01-27 18:57:04.914796598 +0000 UTC m=+0.150471336 container init 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 18:57:04 compute-0 podman_exporter[201367]: ts=2026-01-27T18:57:04.940Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Jan 27 18:57:04 compute-0 podman_exporter[201367]: ts=2026-01-27T18:57:04.940Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Jan 27 18:57:04 compute-0 podman_exporter[201367]: ts=2026-01-27T18:57:04.940Z caller=handler.go:94 level=info msg="enabled collectors"
Jan 27 18:57:04 compute-0 podman_exporter[201367]: ts=2026-01-27T18:57:04.940Z caller=handler.go:105 level=info collector=container
Jan 27 18:57:04 compute-0 podman[201351]: 2026-01-27 18:57:04.955984982 +0000 UTC m=+0.191659710 container start 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 18:57:04 compute-0 systemd[1]: Starting Podman API Service...
Jan 27 18:57:04 compute-0 podman[201351]: podman_exporter
Jan 27 18:57:04 compute-0 systemd[1]: Started Podman API Service.
Jan 27 18:57:04 compute-0 systemd[1]: Started podman_exporter container.
Jan 27 18:57:05 compute-0 podman[201378]: time="2026-01-27T18:57:05Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 27 18:57:05 compute-0 podman[201378]: time="2026-01-27T18:57:05Z" level=info msg="Setting parallel job count to 25"
Jan 27 18:57:05 compute-0 podman[201378]: time="2026-01-27T18:57:05Z" level=info msg="Using sqlite as database backend"
Jan 27 18:57:05 compute-0 sudo[201264]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:05 compute-0 podman[201378]: time="2026-01-27T18:57:05Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 27 18:57:05 compute-0 podman[201378]: time="2026-01-27T18:57:05Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 27 18:57:05 compute-0 podman[201378]: time="2026-01-27T18:57:05Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 27 18:57:05 compute-0 podman[201378]: @ - - [27/Jan/2026:18:57:05 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Jan 27 18:57:05 compute-0 podman[201378]: time="2026-01-27T18:57:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 18:57:05 compute-0 podman[201376]: 2026-01-27 18:57:05.038502645 +0000 UTC m=+0.064357826 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 18:57:05 compute-0 systemd[1]: 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39-2e480011b7d99a47.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 18:57:05 compute-0 systemd[1]: 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39-2e480011b7d99a47.service: Failed with result 'exit-code'.
Jan 27 18:57:05 compute-0 podman[201378]: @ - - [27/Jan/2026:18:57:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 18094 "" "Go-http-client/1.1"
Jan 27 18:57:05 compute-0 podman_exporter[201367]: ts=2026-01-27T18:57:05.057Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Jan 27 18:57:05 compute-0 podman_exporter[201367]: ts=2026-01-27T18:57:05.058Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Jan 27 18:57:05 compute-0 podman_exporter[201367]: ts=2026-01-27T18:57:05.058Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Jan 27 18:57:05 compute-0 python3.9[201566]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 18:57:06 compute-0 auditd[703]: Audit daemon rotating log files
Jan 27 18:57:06 compute-0 sudo[201716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgacropubbbbnqjmqcyytiofrwokgkkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540226.3163116-902-46890037915540/AnsiballZ_stat.py'
Jan 27 18:57:06 compute-0 sudo[201716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:06 compute-0 python3.9[201718]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:57:06 compute-0 sudo[201716]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:07 compute-0 sudo[201841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ankfaqjdkcaztovugbfzkgieadznvoiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540226.3163116-902-46890037915540/AnsiballZ_copy.py'
Jan 27 18:57:07 compute-0 sudo[201841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:07 compute-0 python3.9[201843]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540226.3163116-902-46890037915540/.source.yaml _original_basename=.jxkrq5c0 follow=False checksum=8e2bfdf95c63e6ed9e8f6a466e9d0ea8c9b276bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:07 compute-0 sudo[201841]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:07 compute-0 sudo[201993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrupqvluvlxnkzraqgloipubifvzwgaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540227.5083249-917-203277804696053/AnsiballZ_stat.py'
Jan 27 18:57:07 compute-0 sudo[201993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:07 compute-0 python3.9[201995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:57:08 compute-0 sudo[201993]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:08 compute-0 sudo[202116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bslokkbnqrsmkhzuvrcvtlhbylfkpiqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540227.5083249-917-203277804696053/AnsiballZ_copy.py'
Jan 27 18:57:08 compute-0 sudo[202116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:08 compute-0 python3.9[202118]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540227.5083249-917-203277804696053/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:57:08 compute-0 sudo[202116]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:09 compute-0 sudo[202268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mijbmaranlchdrsibtritdmmlwaqcwxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540229.134372-938-233852797082535/AnsiballZ_file.py'
Jan 27 18:57:09 compute-0 sudo[202268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:09 compute-0 python3.9[202270]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:09 compute-0 sudo[202268]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:10 compute-0 sudo[202420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnmgdeldcaszsafxpukbybwmlphrxijx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540229.8471112-946-31694755219066/AnsiballZ_file.py'
Jan 27 18:57:10 compute-0 sudo[202420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:10 compute-0 python3.9[202422]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:57:10 compute-0 sudo[202420]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:10 compute-0 sudo[202572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ungjnoakdtawathszvrevestlxsrdryk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540230.5922613-954-74704873976226/AnsiballZ_stat.py'
Jan 27 18:57:10 compute-0 sudo[202572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:11 compute-0 python3.9[202574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:57:11 compute-0 sudo[202572]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:11 compute-0 sudo[202650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjnsbkbznxsitaoynydfnipzyofkyurw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540230.5922613-954-74704873976226/AnsiballZ_file.py'
Jan 27 18:57:11 compute-0 sudo[202650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:11 compute-0 python3.9[202652]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.oojro1qa recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:11 compute-0 sudo[202650]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:12 compute-0 python3.9[202802]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:14 compute-0 sudo[203223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkywmimlllbqvxfflqzvcbudjiljwlcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540233.8174717-991-211368755162153/AnsiballZ_container_config_data.py'
Jan 27 18:57:14 compute-0 sudo[203223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:14 compute-0 python3.9[203225]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Jan 27 18:57:14 compute-0 sudo[203223]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:14 compute-0 sudo[203375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goearpvdtldzyjisizetpcyjixmjcgrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540234.6662433-1002-240511069358436/AnsiballZ_container_config_hash.py'
Jan 27 18:57:14 compute-0 sudo[203375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:15 compute-0 python3.9[203377]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 18:57:15 compute-0 sudo[203375]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:15 compute-0 sudo[203527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnncxmmnwhrgccewwvauobmknvpnrqmi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540235.7168093-1012-98467567451491/AnsiballZ_edpm_container_manage.py'
Jan 27 18:57:15 compute-0 sudo[203527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:16 compute-0 python3[203529]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 18:57:17 compute-0 podman[203567]: 2026-01-27 18:57:17.286153722 +0000 UTC m=+0.053803710 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 18:57:18 compute-0 podman[203541]: 2026-01-27 18:57:18.665367533 +0000 UTC m=+2.325078165 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 27 18:57:18 compute-0 podman[203662]: 2026-01-27 18:57:18.848945126 +0000 UTC m=+0.058864952 container create 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, name=ubi9-minimal, release=1755695350)
Jan 27 18:57:18 compute-0 podman[203662]: 2026-01-27 18:57:18.817079797 +0000 UTC m=+0.026999653 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 27 18:57:18 compute-0 python3[203529]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 27 18:57:19 compute-0 sudo[203527]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:19 compute-0 sudo[203849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcmogskbavnopseddeeemzxflkkjlnfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540239.2111151-1020-76678487144493/AnsiballZ_stat.py'
Jan 27 18:57:19 compute-0 sudo[203849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:19 compute-0 python3.9[203851]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:57:19 compute-0 sudo[203849]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:20 compute-0 sudo[204003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aajukdathtbjmtfxqkgwhjaxxxokjncq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540240.1290119-1029-115821855839498/AnsiballZ_file.py'
Jan 27 18:57:20 compute-0 sudo[204003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:57:20.495 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:57:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:57:20.496 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:57:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:57:20.496 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:57:20 compute-0 python3.9[204005]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:20 compute-0 sudo[204003]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:20 compute-0 sudo[204079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gilzmmfqwggbpscrhlnezkgrjgwejixm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540240.1290119-1029-115821855839498/AnsiballZ_stat.py'
Jan 27 18:57:20 compute-0 sudo[204079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:20 compute-0 python3.9[204081]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:57:20 compute-0 sudo[204079]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:21 compute-0 sudo[204230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkdcvkjfgqwlcwxinbyntddcibnxefvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540241.0565047-1029-56089460054866/AnsiballZ_copy.py'
Jan 27 18:57:21 compute-0 sudo[204230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:21 compute-0 python3.9[204232]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769540241.0565047-1029-56089460054866/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:21 compute-0 sudo[204230]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:22 compute-0 sudo[204306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkdxedglawwlkgypsooddffpftimvism ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540241.0565047-1029-56089460054866/AnsiballZ_systemd.py'
Jan 27 18:57:22 compute-0 sudo[204306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:22 compute-0 python3.9[204308]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:57:22 compute-0 systemd[1]: Reloading.
Jan 27 18:57:22 compute-0 systemd-rc-local-generator[204328]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:57:22 compute-0 systemd-sysv-generator[204334]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:57:22 compute-0 sudo[204306]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:23 compute-0 sudo[204417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzewqpvpicspwnlkqsqjpjckpzzvoffr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540241.0565047-1029-56089460054866/AnsiballZ_systemd.py'
Jan 27 18:57:23 compute-0 sudo[204417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:23 compute-0 python3.9[204419]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:57:23 compute-0 systemd[1]: Reloading.
Jan 27 18:57:23 compute-0 systemd-rc-local-generator[204450]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:57:23 compute-0 systemd-sysv-generator[204453]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:57:23 compute-0 systemd[1]: Starting openstack_network_exporter container...
Jan 27 18:57:23 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f057f0d2ff65c5d5e48a6ef8ee60a8e89e44ff049d031c687474a8afb28b86/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 27 18:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f057f0d2ff65c5d5e48a6ef8ee60a8e89e44ff049d031c687474a8afb28b86/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 18:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f057f0d2ff65c5d5e48a6ef8ee60a8e89e44ff049d031c687474a8afb28b86/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 18:57:23 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7.
Jan 27 18:57:23 compute-0 podman[204460]: 2026-01-27 18:57:23.949875761 +0000 UTC m=+0.162556477 container init 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=openstack_network_exporter, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container)
Jan 27 18:57:23 compute-0 openstack_network_exporter[204477]: INFO    18:57:23 main.go:48: registering *bridge.Collector
Jan 27 18:57:23 compute-0 openstack_network_exporter[204477]: INFO    18:57:23 main.go:48: registering *coverage.Collector
Jan 27 18:57:23 compute-0 openstack_network_exporter[204477]: INFO    18:57:23 main.go:48: registering *datapath.Collector
Jan 27 18:57:23 compute-0 openstack_network_exporter[204477]: INFO    18:57:23 main.go:48: registering *iface.Collector
Jan 27 18:57:23 compute-0 openstack_network_exporter[204477]: INFO    18:57:23 main.go:48: registering *memory.Collector
Jan 27 18:57:23 compute-0 openstack_network_exporter[204477]: INFO    18:57:23 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Jan 27 18:57:23 compute-0 openstack_network_exporter[204477]: INFO    18:57:23 main.go:48: registering *ovn.Collector
Jan 27 18:57:23 compute-0 openstack_network_exporter[204477]: INFO    18:57:23 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Jan 27 18:57:23 compute-0 openstack_network_exporter[204477]: INFO    18:57:23 main.go:48: registering *pmd_perf.Collector
Jan 27 18:57:23 compute-0 openstack_network_exporter[204477]: INFO    18:57:23 main.go:48: registering *pmd_rxq.Collector
Jan 27 18:57:23 compute-0 openstack_network_exporter[204477]: INFO    18:57:23 main.go:48: registering *vswitch.Collector
Jan 27 18:57:23 compute-0 openstack_network_exporter[204477]: NOTICE  18:57:23 main.go:76: listening on https://:9105/metrics
Jan 27 18:57:23 compute-0 podman[204460]: 2026-01-27 18:57:23.972501437 +0000 UTC m=+0.185182133 container start 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 27 18:57:23 compute-0 podman[204460]: openstack_network_exporter
Jan 27 18:57:23 compute-0 systemd[1]: Started openstack_network_exporter container.
Jan 27 18:57:24 compute-0 sudo[204417]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:24 compute-0 podman[204487]: 2026-01-27 18:57:24.052704574 +0000 UTC m=+0.070586825 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, name=ubi9-minimal, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., release=1755695350, architecture=x86_64, container_name=openstack_network_exporter)
Jan 27 18:57:24 compute-0 python3.9[204660]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 18:57:25 compute-0 sudo[204810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eobniksdfcavqyhbrchtiigmnexiyrxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540245.2443035-1074-165462472759157/AnsiballZ_stat.py'
Jan 27 18:57:25 compute-0 sudo[204810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:25 compute-0 python3.9[204812]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:57:25 compute-0 sudo[204810]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:26 compute-0 sudo[204935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufqxokqqgozlykajcabndnnrstjtznfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540245.2443035-1074-165462472759157/AnsiballZ_copy.py'
Jan 27 18:57:26 compute-0 sudo[204935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:26 compute-0 python3.9[204937]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540245.2443035-1074-165462472759157/.source.yaml _original_basename=.fn5zra57 follow=False checksum=101e5e10eb449c283320602da664e5ace152bc3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:26 compute-0 sudo[204935]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:26 compute-0 sudo[205087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqskgvdkxofatzyzbmimewnoivgyyneq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540246.600343-1089-73831188790530/AnsiballZ_find.py'
Jan 27 18:57:26 compute-0 sudo[205087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:27 compute-0 python3.9[205089]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 18:57:27 compute-0 sudo[205087]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:27 compute-0 sudo[205239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dctcahrtmmjxqbaqxrbplwsjxdjhtdye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540247.4559689-1099-276646895077377/AnsiballZ_podman_container_info.py'
Jan 27 18:57:27 compute-0 sudo[205239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:28 compute-0 python3.9[205241]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 27 18:57:28 compute-0 sudo[205239]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:28 compute-0 sudo[205405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pieswrgmhudatkxzaqiwshegjqzmiksa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540248.4693918-1107-109662848581835/AnsiballZ_podman_container_exec.py'
Jan 27 18:57:28 compute-0 sudo[205405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:29 compute-0 python3.9[205407]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:57:29 compute-0 systemd[1]: Started libpod-conmon-94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b.scope.
Jan 27 18:57:29 compute-0 podman[205408]: 2026-01-27 18:57:29.304163494 +0000 UTC m=+0.117517409 container exec 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller)
Jan 27 18:57:29 compute-0 podman[205408]: 2026-01-27 18:57:29.315154 +0000 UTC m=+0.128507935 container exec_died 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 18:57:29 compute-0 systemd[1]: libpod-conmon-94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b.scope: Deactivated successfully.
Jan 27 18:57:29 compute-0 sudo[205405]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:29 compute-0 sudo[205590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkkpmoqtwgkngehlbeseonupzblpshcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540249.5448616-1115-251063833581360/AnsiballZ_podman_container_exec.py'
Jan 27 18:57:29 compute-0 sudo[205590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:30 compute-0 python3.9[205592]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:57:30 compute-0 systemd[1]: Started libpod-conmon-94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b.scope.
Jan 27 18:57:30 compute-0 podman[205593]: 2026-01-27 18:57:30.163400606 +0000 UTC m=+0.082182186 container exec 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 18:57:30 compute-0 podman[205593]: 2026-01-27 18:57:30.198136585 +0000 UTC m=+0.116918165 container exec_died 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 18:57:30 compute-0 systemd[1]: libpod-conmon-94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b.scope: Deactivated successfully.
Jan 27 18:57:30 compute-0 sudo[205590]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:30 compute-0 sudo[205776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygcyvjbviuahpzqusprtzsuqhutblgxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540250.415497-1123-114908368184821/AnsiballZ_file.py'
Jan 27 18:57:30 compute-0 sudo[205776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:30 compute-0 python3.9[205778]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:30 compute-0 sudo[205776]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:30 compute-0 nova_compute[185480]: 2026-01-27 18:57:30.875 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:57:30 compute-0 nova_compute[185480]: 2026-01-27 18:57:30.900 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:57:30 compute-0 nova_compute[185480]: 2026-01-27 18:57:30.900 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 18:57:30 compute-0 nova_compute[185480]: 2026-01-27 18:57:30.900 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 18:57:30 compute-0 nova_compute[185480]: 2026-01-27 18:57:30.913 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 18:57:30 compute-0 nova_compute[185480]: 2026-01-27 18:57:30.913 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:57:30 compute-0 nova_compute[185480]: 2026-01-27 18:57:30.914 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:57:30 compute-0 nova_compute[185480]: 2026-01-27 18:57:30.914 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:57:30 compute-0 nova_compute[185480]: 2026-01-27 18:57:30.914 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 18:57:31 compute-0 sudo[205940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbciudzcxdvyzkecambsymodaihyfeop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540251.1092172-1132-90294186823937/AnsiballZ_podman_container_info.py'
Jan 27 18:57:31 compute-0 sudo[205940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:31 compute-0 podman[205902]: 2026-01-27 18:57:31.417016582 +0000 UTC m=+0.065902832 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0)
Jan 27 18:57:31 compute-0 systemd[1]: a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4-77f4bde98b1aa939.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 18:57:31 compute-0 systemd[1]: a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4-77f4bde98b1aa939.service: Failed with result 'exit-code'.
Jan 27 18:57:31 compute-0 nova_compute[185480]: 2026-01-27 18:57:31.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:57:31 compute-0 python3.9[205947]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 27 18:57:31 compute-0 sudo[205940]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:32 compute-0 sudo[206113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adjueeixsbikrqhrkbjkkanamfpvcqiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540251.8931332-1140-179872906648450/AnsiballZ_podman_container_exec.py'
Jan 27 18:57:32 compute-0 sudo[206113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:32 compute-0 python3.9[206115]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:57:32 compute-0 systemd[1]: Started libpod-conmon-fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943.scope.
Jan 27 18:57:32 compute-0 podman[206116]: 2026-01-27 18:57:32.504506117 +0000 UTC m=+0.068738441 container exec fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:57:32 compute-0 podman[206116]: 2026-01-27 18:57:32.540240389 +0000 UTC m=+0.104472713 container exec_died fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.546 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.546 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.546 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.546 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 18:57:32 compute-0 systemd[1]: libpod-conmon-fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943.scope: Deactivated successfully.
Jan 27 18:57:32 compute-0 sudo[206113]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.706 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.707 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5873MB free_disk=72.43367004394531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.707 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.708 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.777 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.778 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.803 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.819 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.820 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 18:57:32 compute-0 nova_compute[185480]: 2026-01-27 18:57:32.821 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:57:33 compute-0 sudo[206297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjsmyyxhibieihcrypqaonmkhzzmdlit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540252.742108-1148-157412679534595/AnsiballZ_podman_container_exec.py'
Jan 27 18:57:33 compute-0 sudo[206297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:33 compute-0 python3.9[206299]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:57:33 compute-0 systemd[1]: Started libpod-conmon-fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943.scope.
Jan 27 18:57:33 compute-0 podman[206300]: 2026-01-27 18:57:33.333477838 +0000 UTC m=+0.105386257 container exec fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 18:57:33 compute-0 podman[206300]: 2026-01-27 18:57:33.370720636 +0000 UTC m=+0.142628996 container exec_died fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 18:57:33 compute-0 systemd[1]: libpod-conmon-fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943.scope: Deactivated successfully.
Jan 27 18:57:33 compute-0 sudo[206297]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:33 compute-0 sudo[206482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laqkqvuevyjprnftacilftoyzscdcdlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540253.6322699-1156-99421892272617/AnsiballZ_file.py'
Jan 27 18:57:33 compute-0 sudo[206482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:34 compute-0 python3.9[206484]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:34 compute-0 sudo[206482]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:34 compute-0 sudo[206644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khwdfgjxeblyguzxxgcprobvzoymkllh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540254.4289012-1165-67920024031488/AnsiballZ_podman_container_info.py'
Jan 27 18:57:34 compute-0 sudo[206644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:34 compute-0 podman[206608]: 2026-01-27 18:57:34.75673183 +0000 UTC m=+0.078497216 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 18:57:34 compute-0 podman[206653]: 2026-01-27 18:57:34.845900214 +0000 UTC m=+0.090149107 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 27 18:57:34 compute-0 python3.9[206656]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 27 18:57:35 compute-0 sudo[206644]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:35 compute-0 podman[206743]: 2026-01-27 18:57:35.291801123 +0000 UTC m=+0.070003512 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 18:57:35 compute-0 sudo[206869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tupzivfugpkoguinulryznkwhyzawzgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540255.2086282-1173-91759417018375/AnsiballZ_podman_container_exec.py'
Jan 27 18:57:35 compute-0 sudo[206869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:35 compute-0 python3.9[206871]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:57:35 compute-0 systemd[1]: Started libpod-conmon-a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4.scope.
Jan 27 18:57:35 compute-0 podman[206872]: 2026-01-27 18:57:35.829867989 +0000 UTC m=+0.091388859 container exec a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0)
Jan 27 18:57:35 compute-0 podman[206872]: 2026-01-27 18:57:35.861946714 +0000 UTC m=+0.123467514 container exec_died a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0)
Jan 27 18:57:35 compute-0 systemd[1]: libpod-conmon-a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4.scope: Deactivated successfully.
Jan 27 18:57:35 compute-0 sudo[206869]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:36 compute-0 sudo[207051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bobjtclhlmibglgzzyrjasvgavdttihl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540256.0902429-1181-99936756383643/AnsiballZ_podman_container_exec.py'
Jan 27 18:57:36 compute-0 sudo[207051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:36 compute-0 python3.9[207053]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:57:36 compute-0 systemd[1]: Started libpod-conmon-a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4.scope.
Jan 27 18:57:36 compute-0 podman[207054]: 2026-01-27 18:57:36.685959524 +0000 UTC m=+0.069346165 container exec a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 27 18:57:36 compute-0 podman[207054]: 2026-01-27 18:57:36.722322702 +0000 UTC m=+0.105709323 container exec_died a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:57:36 compute-0 systemd[1]: libpod-conmon-a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4.scope: Deactivated successfully.
Jan 27 18:57:36 compute-0 sudo[207051]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:37 compute-0 sudo[207234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfjqaoqfdtjdtpbytpvyjkdvdpvamhpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540257.0644999-1189-206030934566177/AnsiballZ_file.py'
Jan 27 18:57:37 compute-0 sudo[207234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:37 compute-0 python3.9[207236]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:37 compute-0 sudo[207234]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:38 compute-0 sudo[207386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boscnocxgfiifpvschspqiuwnpbmxjse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540257.822757-1198-145311226988882/AnsiballZ_podman_container_info.py'
Jan 27 18:57:38 compute-0 sudo[207386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:38 compute-0 python3.9[207388]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 27 18:57:38 compute-0 sudo[207386]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:38 compute-0 sudo[207550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vitqvpzaubvxwngrlntnvxrgtnbxamdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540258.5779746-1206-96868581955509/AnsiballZ_podman_container_exec.py'
Jan 27 18:57:38 compute-0 sudo[207550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:39 compute-0 python3.9[207552]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:57:39 compute-0 systemd[1]: Started libpod-conmon-87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3.scope.
Jan 27 18:57:39 compute-0 podman[207553]: 2026-01-27 18:57:39.241794132 +0000 UTC m=+0.085679231 container exec 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 18:57:39 compute-0 podman[207553]: 2026-01-27 18:57:39.27734468 +0000 UTC m=+0.121229769 container exec_died 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 18:57:39 compute-0 systemd[1]: libpod-conmon-87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3.scope: Deactivated successfully.
Jan 27 18:57:39 compute-0 sudo[207550]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:39 compute-0 sudo[207735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoxclsudborrspbyvckwfvnbesylajci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540259.5066063-1214-178058908348505/AnsiballZ_podman_container_exec.py'
Jan 27 18:57:39 compute-0 sudo[207735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:39 compute-0 python3.9[207737]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:57:40 compute-0 systemd[1]: Started libpod-conmon-87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3.scope.
Jan 27 18:57:40 compute-0 podman[207738]: 2026-01-27 18:57:40.054736835 +0000 UTC m=+0.074695395 container exec 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 18:57:40 compute-0 podman[207738]: 2026-01-27 18:57:40.090361686 +0000 UTC m=+0.110320276 container exec_died 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 18:57:40 compute-0 systemd[1]: libpod-conmon-87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3.scope: Deactivated successfully.
Jan 27 18:57:40 compute-0 sudo[207735]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:40 compute-0 sudo[207920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnfompwwuoijvabeccwryqkzeeqqsfxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540260.3243158-1222-75729447812717/AnsiballZ_file.py'
Jan 27 18:57:40 compute-0 sudo[207920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:40 compute-0 python3.9[207922]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:40 compute-0 sudo[207920]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:41 compute-0 sudo[208072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqttbfgzndiirzniznlrnallwawobeqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540261.0779083-1231-137289386713241/AnsiballZ_podman_container_info.py'
Jan 27 18:57:41 compute-0 sudo[208072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:41 compute-0 python3.9[208074]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 27 18:57:41 compute-0 sudo[208072]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:42 compute-0 sudo[208238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djzspyeddzwmapecxjkttttmsahperxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540261.9427505-1239-143260761644937/AnsiballZ_podman_container_exec.py'
Jan 27 18:57:42 compute-0 sudo[208238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:42 compute-0 python3.9[208240]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:57:42 compute-0 systemd[1]: Started libpod-conmon-2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39.scope.
Jan 27 18:57:42 compute-0 podman[208241]: 2026-01-27 18:57:42.577308349 +0000 UTC m=+0.075498555 container exec 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 18:57:42 compute-0 podman[208241]: 2026-01-27 18:57:42.612350685 +0000 UTC m=+0.110540881 container exec_died 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 18:57:42 compute-0 systemd[1]: libpod-conmon-2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39.scope: Deactivated successfully.
Jan 27 18:57:42 compute-0 sudo[208238]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:43 compute-0 sudo[208423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nymelgnwxcdyqtqlestuziuwlzgulvhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540262.8214607-1247-50665204413736/AnsiballZ_podman_container_exec.py'
Jan 27 18:57:43 compute-0 sudo[208423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:43 compute-0 python3.9[208425]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:57:43 compute-0 systemd[1]: Started libpod-conmon-2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39.scope.
Jan 27 18:57:43 compute-0 podman[208426]: 2026-01-27 18:57:43.389686678 +0000 UTC m=+0.068636198 container exec 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 18:57:43 compute-0 podman[208426]: 2026-01-27 18:57:43.419621661 +0000 UTC m=+0.098571181 container exec_died 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 18:57:43 compute-0 systemd[1]: libpod-conmon-2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39.scope: Deactivated successfully.
Jan 27 18:57:43 compute-0 sudo[208423]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:43 compute-0 sudo[208607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzvknznsdveevsoxnzgfkvhzlyjtkglk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540263.619293-1255-6142218457038/AnsiballZ_file.py'
Jan 27 18:57:43 compute-0 sudo[208607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:44 compute-0 python3.9[208609]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:44 compute-0 sudo[208607]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:44 compute-0 sudo[208759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgzgcfergcefhioaqaspjrwszkxswxdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540264.3254106-1264-240935413904290/AnsiballZ_podman_container_info.py'
Jan 27 18:57:44 compute-0 sudo[208759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:44 compute-0 python3.9[208761]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 27 18:57:44 compute-0 sudo[208759]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:45 compute-0 sudo[208924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uytnbluewkiyeesyegczhfupvswqagkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540265.035244-1272-112839225300419/AnsiballZ_podman_container_exec.py'
Jan 27 18:57:45 compute-0 sudo[208924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:45 compute-0 python3.9[208926]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:57:45 compute-0 systemd[1]: Started libpod-conmon-2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7.scope.
Jan 27 18:57:45 compute-0 podman[208927]: 2026-01-27 18:57:45.656912115 +0000 UTC m=+0.084575123 container exec 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7)
Jan 27 18:57:45 compute-0 podman[208927]: 2026-01-27 18:57:45.687816851 +0000 UTC m=+0.115479819 container exec_died 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 18:57:45 compute-0 systemd[1]: libpod-conmon-2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7.scope: Deactivated successfully.
Jan 27 18:57:45 compute-0 sudo[208924]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:46 compute-0 sudo[209109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swnmvrmbiohkoxebrlgplkyfnimpbese ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540265.9024763-1280-213172594304539/AnsiballZ_podman_container_exec.py'
Jan 27 18:57:46 compute-0 sudo[209109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:46 compute-0 python3.9[209111]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:57:46 compute-0 systemd[1]: Started libpod-conmon-2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7.scope.
Jan 27 18:57:46 compute-0 podman[209112]: 2026-01-27 18:57:46.539317416 +0000 UTC m=+0.072316777 container exec 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter)
Jan 27 18:57:46 compute-0 podman[209112]: 2026-01-27 18:57:46.570342846 +0000 UTC m=+0.103342197 container exec_died 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git)
Jan 27 18:57:46 compute-0 systemd[1]: libpod-conmon-2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7.scope: Deactivated successfully.
Jan 27 18:57:46 compute-0 sudo[209109]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:47 compute-0 sudo[209294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciczhpoilayjykxiatkuyabnrappzuyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540266.7899654-1288-132631542572233/AnsiballZ_file.py'
Jan 27 18:57:47 compute-0 sudo[209294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:47 compute-0 python3.9[209296]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:47 compute-0 sudo[209294]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:47 compute-0 sudo[209459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxltmoireusmuvwxhakaytpjehiqbyxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540267.511775-1297-71537159613507/AnsiballZ_file.py'
Jan 27 18:57:47 compute-0 sudo[209459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:47 compute-0 podman[209420]: 2026-01-27 18:57:47.841732632 +0000 UTC m=+0.073614150 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 18:57:48 compute-0 python3.9[209467]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:48 compute-0 sudo[209459]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:48 compute-0 sudo[209622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oagoazjjueptmyhwiwabuvbvrbbgbsfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540268.2543538-1305-22490144086599/AnsiballZ_stat.py'
Jan 27 18:57:48 compute-0 sudo[209622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:48 compute-0 python3.9[209624]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:57:48 compute-0 sudo[209622]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:49 compute-0 sudo[209745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bimxbmlocskgcobhfanrmwusvzubfhpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540268.2543538-1305-22490144086599/AnsiballZ_copy.py'
Jan 27 18:57:49 compute-0 sudo[209745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:49 compute-0 python3.9[209747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540268.2543538-1305-22490144086599/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:49 compute-0 sudo[209745]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:49 compute-0 sudo[209897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsjucjcjozromhydvogszktgipkjayls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540269.5788746-1321-43825463570617/AnsiballZ_file.py'
Jan 27 18:57:49 compute-0 sudo[209897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:50 compute-0 python3.9[209899]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:50 compute-0 sudo[209897]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:51 compute-0 sudo[210049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hokmmsjzjosbqlzkemeyjbbogankgxle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540270.87528-1329-176347273836079/AnsiballZ_stat.py'
Jan 27 18:57:51 compute-0 sudo[210049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:51 compute-0 python3.9[210051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:57:51 compute-0 sudo[210049]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:51 compute-0 sudo[210127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yudoadpzxduyrfiighxbhfbnnheppxme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540270.87528-1329-176347273836079/AnsiballZ_file.py'
Jan 27 18:57:51 compute-0 sudo[210127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:51 compute-0 python3.9[210129]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:51 compute-0 sudo[210127]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:52 compute-0 sudo[210279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcfdyhwjxntildcdoieydluysxybvyou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540272.0503385-1341-260103279352084/AnsiballZ_stat.py'
Jan 27 18:57:52 compute-0 sudo[210279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:52 compute-0 python3.9[210281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:57:52 compute-0 sudo[210279]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:52 compute-0 sudo[210357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpcharqrumogrrgpetuekleggwfojfyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540272.0503385-1341-260103279352084/AnsiballZ_file.py'
Jan 27 18:57:52 compute-0 sudo[210357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:53 compute-0 python3.9[210359]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.rq7zaig4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:53 compute-0 sudo[210357]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:53 compute-0 sudo[210509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaxxxyhmzuazmbokqcnrfvpzleeaanyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540273.2107904-1353-49691906924230/AnsiballZ_stat.py'
Jan 27 18:57:53 compute-0 sudo[210509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:53 compute-0 python3.9[210511]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:57:53 compute-0 sudo[210509]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:54 compute-0 sudo[210587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-secejpgitjeivcpcfcakfnncazcfxubi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540273.2107904-1353-49691906924230/AnsiballZ_file.py'
Jan 27 18:57:54 compute-0 sudo[210587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:54 compute-0 podman[210589]: 2026-01-27 18:57:54.184291414 +0000 UTC m=+0.062325531 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 27 18:57:54 compute-0 python3.9[210590]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:54 compute-0 sudo[210587]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:54 compute-0 sudo[210761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tckeculgkxvbybsrwycuxvmmchdkwdgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540274.6447089-1366-243367544758259/AnsiballZ_command.py'
Jan 27 18:57:54 compute-0 sudo[210761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:55 compute-0 python3.9[210763]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:57:55 compute-0 sudo[210761]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:55 compute-0 sudo[210914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxgtqoovjqfhoxvyivpijvspflafouyj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540275.3700287-1374-23621431122855/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 18:57:55 compute-0 sudo[210914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:56 compute-0 python3[210916]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 18:57:56 compute-0 sudo[210914]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:56 compute-0 sudo[211066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhbfzpezpodkkiiizxljtxtjfljvnook ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540276.2718637-1382-191459153770971/AnsiballZ_stat.py'
Jan 27 18:57:56 compute-0 sudo[211066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:56 compute-0 python3.9[211068]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:57:56 compute-0 sudo[211066]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:57 compute-0 sudo[211144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjjvhrqgpnwgnnqcfqzcaugqplxqphzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540276.2718637-1382-191459153770971/AnsiballZ_file.py'
Jan 27 18:57:57 compute-0 sudo[211144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:57 compute-0 python3.9[211146]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:57 compute-0 sudo[211144]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:57 compute-0 sudo[211296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vovbsfszmbicprlmywfpyaazmuzmpbsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540277.4210758-1394-30580418592704/AnsiballZ_stat.py'
Jan 27 18:57:57 compute-0 sudo[211296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:57 compute-0 python3.9[211298]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:57:57 compute-0 sudo[211296]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:58 compute-0 sudo[211374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laopmmkkjnksvkhiucycrjxctpxajsyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540277.4210758-1394-30580418592704/AnsiballZ_file.py'
Jan 27 18:57:58 compute-0 sudo[211374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:58 compute-0 python3.9[211376]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:58 compute-0 sudo[211374]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:59 compute-0 sudo[211526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydhgcpmtrokskkuukzvgidykectykqqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540278.8637784-1406-172668577623661/AnsiballZ_stat.py'
Jan 27 18:57:59 compute-0 sudo[211526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:59 compute-0 python3.9[211528]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:57:59 compute-0 sudo[211526]: pam_unix(sudo:session): session closed for user root
Jan 27 18:57:59 compute-0 sudo[211604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbjbqbhkzelhisagycoyiuogayiremkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540278.8637784-1406-172668577623661/AnsiballZ_file.py'
Jan 27 18:57:59 compute-0 sudo[211604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:57:59 compute-0 python3.9[211606]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:57:59 compute-0 sudo[211604]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:00 compute-0 sudo[211756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibebqnudhlkcvfpphssltkveouialayy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540280.0046124-1418-222342790402864/AnsiballZ_stat.py'
Jan 27 18:58:00 compute-0 sudo[211756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:00 compute-0 python3.9[211758]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:58:00 compute-0 sudo[211756]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:00 compute-0 sudo[211834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cifnnmfrbmfplgghpzfvzhextpdlwfez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540280.0046124-1418-222342790402864/AnsiballZ_file.py'
Jan 27 18:58:00 compute-0 sudo[211834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:00 compute-0 python3.9[211836]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:00 compute-0 sudo[211834]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:01 compute-0 sudo[211996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgqvfquuscbvqyxdsqolhtlfkoobnstl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540281.1453571-1430-210461042377547/AnsiballZ_stat.py'
Jan 27 18:58:01 compute-0 sudo[211996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:01 compute-0 podman[211960]: 2026-01-27 18:58:01.555307693 +0000 UTC m=+0.086124651 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 27 18:58:01 compute-0 python3.9[212004]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:58:01 compute-0 sudo[211996]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:02 compute-0 sudo[212131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amcgrkjxwpwvlywcnyozumpvgcqjtiaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540281.1453571-1430-210461042377547/AnsiballZ_copy.py'
Jan 27 18:58:02 compute-0 sudo[212131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:02 compute-0 python3.9[212133]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769540281.1453571-1430-210461042377547/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:02 compute-0 sudo[212131]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:02 compute-0 sudo[212283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaspaqchaqkeswkumetvhuxfycfmanfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540282.4642317-1445-65730334425666/AnsiballZ_file.py'
Jan 27 18:58:02 compute-0 sudo[212283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:02 compute-0 python3.9[212285]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:02 compute-0 sudo[212283]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:03 compute-0 sudo[212435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqdvebedmretnjeagbjoqjlkuayqyhcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540283.1038194-1453-236666274749230/AnsiballZ_command.py'
Jan 27 18:58:03 compute-0 sudo[212435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:03 compute-0 python3.9[212437]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:58:03 compute-0 sudo[212435]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:04 compute-0 sudo[212590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hewgqbejdtgykbissnmktwbnrrejwrsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540283.7774148-1461-199400634918284/AnsiballZ_blockinfile.py'
Jan 27 18:58:04 compute-0 sudo[212590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:04 compute-0 python3.9[212592]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:04 compute-0 sudo[212590]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:05 compute-0 sudo[212770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuuknnyelpqgxquaswqkfvuxlgqyfomt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540284.723133-1470-135003545875814/AnsiballZ_command.py'
Jan 27 18:58:05 compute-0 sudo[212770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:05 compute-0 podman[212717]: 2026-01-27 18:58:05.138739509 +0000 UTC m=+0.057438072 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 27 18:58:05 compute-0 podman[212716]: 2026-01-27 18:58:05.166214109 +0000 UTC m=+0.094474924 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:58:05 compute-0 python3.9[212780]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:58:05 compute-0 sudo[212770]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:05 compute-0 sudo[212951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdwqxiynirlsapxzjsbcixhzigsktucs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540285.5194266-1478-233953625693288/AnsiballZ_stat.py'
Jan 27 18:58:05 compute-0 sudo[212951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:05 compute-0 podman[212911]: 2026-01-27 18:58:05.842619154 +0000 UTC m=+0.077974413 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 18:58:06 compute-0 python3.9[212962]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:58:06 compute-0 sudo[212951]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:06 compute-0 sudo[213114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbbsunjtfegzkywnfwqoiuhfegrelwzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540286.191168-1486-8962972679272/AnsiballZ_command.py'
Jan 27 18:58:06 compute-0 sudo[213114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:06 compute-0 python3.9[213116]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:58:06 compute-0 sudo[213114]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:06 compute-0 podman[201378]: time="2026-01-27T18:58:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 18:58:06 compute-0 podman[201378]: @ - - [27/Jan/2026:18:58:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21256 "" "Go-http-client/1.1"
Jan 27 18:58:06 compute-0 podman[201378]: @ - - [27/Jan/2026:18:58:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2981 "" "Go-http-client/1.1"
Jan 27 18:58:07 compute-0 sudo[213269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqjcvychjzykailufsdwbsckruqnweju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540286.9181874-1494-134252692433663/AnsiballZ_file.py'
Jan 27 18:58:07 compute-0 sudo[213269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:07 compute-0 python3.9[213271]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:07 compute-0 sudo[213269]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:07 compute-0 sshd-session[185850]: Connection closed by 192.168.122.31 port 55912
Jan 27 18:58:07 compute-0 sshd-session[185847]: pam_unix(sshd:session): session closed for user zuul
Jan 27 18:58:07 compute-0 systemd-logind[795]: Session 25 logged out. Waiting for processes to exit.
Jan 27 18:58:07 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Jan 27 18:58:07 compute-0 systemd[1]: session-25.scope: Consumed 1min 53.861s CPU time.
Jan 27 18:58:07 compute-0 systemd-logind[795]: Removed session 25.
Jan 27 18:58:08 compute-0 openstack_network_exporter[204477]: ERROR   18:58:08 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 18:58:08 compute-0 openstack_network_exporter[204477]: ERROR   18:58:08 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 18:58:13 compute-0 sshd-session[213302]: Accepted publickey for zuul from 192.168.122.31 port 53934 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 18:58:13 compute-0 systemd-logind[795]: New session 26 of user zuul.
Jan 27 18:58:13 compute-0 systemd[1]: Started Session 26 of User zuul.
Jan 27 18:58:13 compute-0 sshd-session[213302]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 18:58:14 compute-0 sudo[213457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzbbduqobmxgcpxoqlqgigvdjehwwdsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540293.6611724-19-280528624421376/AnsiballZ_systemd_service.py'
Jan 27 18:58:14 compute-0 sudo[213457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:14 compute-0 python3.9[213459]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:58:14 compute-0 systemd[1]: Reloading.
Jan 27 18:58:14 compute-0 systemd-rc-local-generator[213486]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:58:14 compute-0 systemd-sysv-generator[213493]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:58:14 compute-0 sshd-session[213423]: Received disconnect from 45.148.10.141 port 60804:11:  [preauth]
Jan 27 18:58:14 compute-0 sshd-session[213423]: Disconnected from authenticating user root 45.148.10.141 port 60804 [preauth]
Jan 27 18:58:14 compute-0 sudo[213457]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:15 compute-0 python3.9[213644]: ansible-ansible.builtin.service_facts Invoked
Jan 27 18:58:15 compute-0 network[213661]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 18:58:15 compute-0 network[213662]: 'network-scripts' will be removed from distribution in near future.
Jan 27 18:58:15 compute-0 network[213663]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 18:58:18 compute-0 podman[213729]: 2026-01-27 18:58:18.334187333 +0000 UTC m=+0.101066125 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 18:58:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:58:20.497 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:58:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:58:20.498 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:58:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:58:20.499 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:58:22 compute-0 sudo[213959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xddiuehhsuacjypbziuulylvoctbqfzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540301.3549063-42-168263362102340/AnsiballZ_systemd_service.py'
Jan 27 18:58:22 compute-0 sudo[213959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:22 compute-0 python3.9[213961]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:58:23 compute-0 sudo[213959]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:24 compute-0 sudo[214123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnubbyxygyyqbrluiqvobsxrkrobouue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540303.9458697-52-16142284466361/AnsiballZ_file.py'
Jan 27 18:58:24 compute-0 sudo[214123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:24 compute-0 podman[214087]: 2026-01-27 18:58:24.459475124 +0000 UTC m=+0.073557075 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., version=9.6, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Jan 27 18:58:24 compute-0 python3.9[214132]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:24 compute-0 sudo[214123]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:25 compute-0 sudo[214286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trqczbvilluvbafcjrxffribsktffggi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540304.8443441-60-88326255911510/AnsiballZ_file.py'
Jan 27 18:58:25 compute-0 sudo[214286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:25 compute-0 python3.9[214288]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:25 compute-0 sudo[214286]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:26 compute-0 sudo[214438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbqqilruocoeswinacrnazvkyczogggu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540305.6026487-69-211399552628725/AnsiballZ_command.py'
Jan 27 18:58:26 compute-0 sudo[214438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:26 compute-0 python3.9[214440]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:58:26 compute-0 sudo[214438]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:27 compute-0 python3.9[214592]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 18:58:27 compute-0 sudo[214742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnxypefkxjctirzhxejvnbrkhenvcdkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540307.4113991-87-46598168755164/AnsiballZ_systemd_service.py'
Jan 27 18:58:27 compute-0 sudo[214742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:28 compute-0 python3.9[214744]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:58:28 compute-0 systemd[1]: Reloading.
Jan 27 18:58:28 compute-0 systemd-rc-local-generator[214769]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:58:28 compute-0 systemd-sysv-generator[214774]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:58:28 compute-0 sudo[214742]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:28 compute-0 sudo[214931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjopwrxhigbwwkbmcbxhmmpcisuxlupm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540308.6214674-95-99203014364972/AnsiballZ_command.py'
Jan 27 18:58:28 compute-0 sudo[214931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:29 compute-0 sshd-session[214780]: Invalid user sol from 45.148.10.240 port 51186
Jan 27 18:58:29 compute-0 python3.9[214933]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 18:58:29 compute-0 sudo[214931]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:29 compute-0 sshd-session[214780]: Connection closed by invalid user sol 45.148.10.240 port 51186 [preauth]
Jan 27 18:58:29 compute-0 sudo[215084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmxpanfeziympnmqiicqeurvuqfqwzcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540309.4360192-104-76766719680545/AnsiballZ_file.py'
Jan 27 18:58:29 compute-0 sudo[215084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:29 compute-0 podman[201378]: time="2026-01-27T18:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 18:58:29 compute-0 podman[201378]: @ - - [27/Jan/2026:18:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21256 "" "Go-http-client/1.1"
Jan 27 18:58:29 compute-0 podman[201378]: @ - - [27/Jan/2026:18:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2998 "" "Go-http-client/1.1"
Jan 27 18:58:29 compute-0 python3.9[215086]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:58:30 compute-0 sudo[215084]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:30 compute-0 python3.9[215238]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:58:30 compute-0 nova_compute[185480]: 2026-01-27 18:58:30.821 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:58:30 compute-0 nova_compute[185480]: 2026-01-27 18:58:30.823 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 18:58:31 compute-0 openstack_network_exporter[204477]: ERROR   18:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 18:58:31 compute-0 openstack_network_exporter[204477]: ERROR   18:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 18:58:31 compute-0 python3.9[215390]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:58:31 compute-0 nova_compute[185480]: 2026-01-27 18:58:31.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:58:31 compute-0 nova_compute[185480]: 2026-01-27 18:58:31.517 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:58:32 compute-0 podman[215485]: 2026-01-27 18:58:32.040099224 +0000 UTC m=+0.066816040 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.091 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.092 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d2571910>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 18:58:32.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 18:58:32 compute-0 python3.9[215524]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540310.9963188-120-40604088830776/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.511 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.513 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.514 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.514 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.556 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.556 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.557 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.611 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.612 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.612 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.612 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.790 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.792 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5896MB free_disk=72.48066329956055GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.792 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.793 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:58:32 compute-0 python3.9[215682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.905 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.906 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.932 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.947 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.950 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 18:58:32 compute-0 nova_compute[185480]: 2026-01-27 18:58:32.950 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:58:33 compute-0 python3.9[215803]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540312.4214187-135-2147578119807/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:58:33 compute-0 nova_compute[185480]: 2026-01-27 18:58:33.909 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:58:33 compute-0 nova_compute[185480]: 2026-01-27 18:58:33.909 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:58:34 compute-0 sudo[215953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtldbwzvxpiyesvjhmzowobhrrqpbewr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540313.7403488-153-206730854672592/AnsiballZ_getent.py'
Jan 27 18:58:34 compute-0 sudo[215953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:34 compute-0 python3.9[215955]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 27 18:58:34 compute-0 sudo[215953]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:35 compute-0 podman[216034]: 2026-01-27 18:58:35.301734482 +0000 UTC m=+0.065342194 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 18:58:35 compute-0 podman[216021]: 2026-01-27 18:58:35.356830236 +0000 UTC m=+0.115454997 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:58:35 compute-0 python3.9[216152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:58:36 compute-0 podman[216247]: 2026-01-27 18:58:36.000586394 +0000 UTC m=+0.052225214 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 18:58:36 compute-0 python3.9[216289]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769540315.1631978-181-266091616436611/.source.conf _original_basename=ceilometer.conf follow=False checksum=f817847bb0474d7c55a7ad9afdea5f1400a30720 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:36 compute-0 python3.9[216448]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:58:37 compute-0 python3.9[216569]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769540316.341155-181-188396297913271/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:37 compute-0 python3.9[216719]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:58:38 compute-0 python3.9[216840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769540317.4950066-181-232444785650973/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:39 compute-0 python3.9[216990]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:58:39 compute-0 python3.9[217142]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:58:40 compute-0 python3.9[217294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:58:40 compute-0 python3.9[217415]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540319.9463649-240-55946418860101/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:41 compute-0 sudo[217565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxqtnosqmmankvvhdxlnnjsynlhslrgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540321.1380026-255-32297731168721/AnsiballZ_file.py'
Jan 27 18:58:41 compute-0 sudo[217565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:41 compute-0 python3.9[217567]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:41 compute-0 sudo[217565]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:42 compute-0 sudo[217717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uodxasdcnvekkblhdohlydepqvfjyxlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540321.8074956-263-257472220109809/AnsiballZ_file.py'
Jan 27 18:58:42 compute-0 sudo[217717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:42 compute-0 python3.9[217719]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:42 compute-0 sudo[217717]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:42 compute-0 sudo[217869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrglywtsxaiyftfzwbcbqszoarpsqksj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540322.482938-271-256354721498458/AnsiballZ_file.py'
Jan 27 18:58:42 compute-0 sudo[217869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:42 compute-0 python3.9[217871]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:58:42 compute-0 sudo[217869]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:43 compute-0 sudo[218021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuzqnnrhjhvhbkexhnbfrvsqchmshdoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540323.1497838-279-208840377168586/AnsiballZ_stat.py'
Jan 27 18:58:43 compute-0 sudo[218021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:43 compute-0 python3.9[218023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:58:43 compute-0 sudo[218021]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:43 compute-0 sudo[218144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-memzezbcywtgutgeuxsqnwzpfxrzrrvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540323.1497838-279-208840377168586/AnsiballZ_copy.py'
Jan 27 18:58:43 compute-0 sudo[218144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:44 compute-0 python3.9[218146]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540323.1497838-279-208840377168586/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:58:44 compute-0 sudo[218144]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:44 compute-0 sudo[218220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skjrwknoqjqmtdmjjyxpvtsfsfegfoye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540323.1497838-279-208840377168586/AnsiballZ_stat.py'
Jan 27 18:58:44 compute-0 sudo[218220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:44 compute-0 python3.9[218222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:58:44 compute-0 sudo[218220]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:45 compute-0 sudo[218343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfojuzgbsvdllkkytfjoygpguwpossdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540323.1497838-279-208840377168586/AnsiballZ_copy.py'
Jan 27 18:58:45 compute-0 sudo[218343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:45 compute-0 python3.9[218345]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540323.1497838-279-208840377168586/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:58:45 compute-0 sudo[218343]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:45 compute-0 sudo[218495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcdkjhkcsrzznxvfxxuadngmwyfjeojn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540325.474281-279-253144266769241/AnsiballZ_stat.py'
Jan 27 18:58:45 compute-0 sudo[218495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:45 compute-0 python3.9[218497]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:58:45 compute-0 sudo[218495]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:46 compute-0 sudo[218618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufmxyoracdqvvokztiycosbdnvtmtsxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540325.474281-279-253144266769241/AnsiballZ_copy.py'
Jan 27 18:58:46 compute-0 sudo[218618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:46 compute-0 python3.9[218620]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769540325.474281-279-253144266769241/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:58:46 compute-0 sudo[218618]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:47 compute-0 sudo[218770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpgddwltfesthldqzehmszxnxnjbtxjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540326.8642983-321-73051344873381/AnsiballZ_file.py'
Jan 27 18:58:47 compute-0 sudo[218770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:47 compute-0 python3.9[218772]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:47 compute-0 sudo[218770]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:47 compute-0 sudo[218922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvsjrubiwxfoiyjfwnejvcoxkmhbvcxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540327.5752225-329-232376245493278/AnsiballZ_file.py'
Jan 27 18:58:47 compute-0 sudo[218922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:48 compute-0 python3.9[218924]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:58:48 compute-0 sudo[218922]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:48 compute-0 sudo[219088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vugrrsqyskisvkjjizqnynxxvrikpacx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540328.2858183-337-92562429606878/AnsiballZ_stat.py'
Jan 27 18:58:48 compute-0 sudo[219088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:48 compute-0 podman[219048]: 2026-01-27 18:58:48.612262612 +0000 UTC m=+0.073533815 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 18:58:48 compute-0 python3.9[219094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:58:48 compute-0 sudo[219088]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:49 compute-0 sudo[219222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqfcvtovgrozxsfvfhbnxijwvgtatnqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540328.2858183-337-92562429606878/AnsiballZ_copy.py'
Jan 27 18:58:49 compute-0 sudo[219222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:49 compute-0 python3.9[219224]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540328.2858183-337-92562429606878/.source.json _original_basename=.xd3hzvrw follow=False checksum=fa47598aea39469905a43b7b570ec2fd120965fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:49 compute-0 sudo[219222]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:50 compute-0 python3.9[219374]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:52 compute-0 sudo[219795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzyxkwdpgmmyyzwmnjsyvvoqhhnzrazd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540331.8653421-377-277279402428114/AnsiballZ_container_config_data.py'
Jan 27 18:58:52 compute-0 sudo[219795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:52 compute-0 python3.9[219797]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_pattern=*.json debug=False
Jan 27 18:58:52 compute-0 sudo[219795]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:53 compute-0 sudo[219947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-befuuiqarybmabfobxxphmrpyonhwbwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540332.9971304-388-270893842453901/AnsiballZ_container_config_hash.py'
Jan 27 18:58:53 compute-0 sudo[219947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:53 compute-0 python3.9[219949]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 18:58:53 compute-0 sudo[219947]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:54 compute-0 podman[220073]: 2026-01-27 18:58:54.631774314 +0000 UTC m=+0.064138916 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=openstack_network_exporter)
Jan 27 18:58:54 compute-0 sudo[220115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjlmridsqgyeykxmazvhyvncxmzpodbu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540334.0592704-398-202247965715174/AnsiballZ_edpm_container_manage.py'
Jan 27 18:58:54 compute-0 sudo[220115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:54 compute-0 python3[220121]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_id=ceilometer_agent_ipmi config_overrides={} config_patterns=*.json containers=['ceilometer_agent_ipmi'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 18:58:55 compute-0 podman[220158]: 2026-01-27 18:58:55.126254686 +0000 UTC m=+0.048701341 container create a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 18:58:55 compute-0 podman[220158]: 2026-01-27 18:58:55.101880816 +0000 UTC m=+0.024327491 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 27 18:58:55 compute-0 python3[220121]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d --healthcheck-command /openstack/healthcheck ipmi --label config_id=ceilometer_agent_ipmi --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Jan 27 18:58:55 compute-0 sudo[220115]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:55 compute-0 sudo[220346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obfqxpukfelhijxwrfrlypoaurdosgoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540335.4618852-406-12788894176639/AnsiballZ_stat.py'
Jan 27 18:58:55 compute-0 sudo[220346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:55 compute-0 python3.9[220348]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:58:55 compute-0 sudo[220346]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:56 compute-0 sudo[220500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhfxifkskadwhwxcpshvfngimsbfiasl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540336.2070785-415-142325245679469/AnsiballZ_file.py'
Jan 27 18:58:56 compute-0 sudo[220500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:56 compute-0 python3.9[220502]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:56 compute-0 sudo[220500]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:56 compute-0 sudo[220576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgwbpxyiblekdlhkolctmdbxujijkybg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540336.2070785-415-142325245679469/AnsiballZ_stat.py'
Jan 27 18:58:56 compute-0 sudo[220576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:57 compute-0 python3.9[220578]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:58:57 compute-0 sudo[220576]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:57 compute-0 sudo[220727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwpzcqoqylecucipaxlgxridqydznttb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540337.213111-415-237688941366997/AnsiballZ_copy.py'
Jan 27 18:58:57 compute-0 sudo[220727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:58 compute-0 python3.9[220729]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769540337.213111-415-237688941366997/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:58:58 compute-0 sudo[220727]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:58 compute-0 sudo[220803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnhkeglrqsglrlmreobxstlsuuvlfvot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540337.213111-415-237688941366997/AnsiballZ_systemd.py'
Jan 27 18:58:58 compute-0 sudo[220803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:59 compute-0 python3.9[220805]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:58:59 compute-0 systemd[1]: Reloading.
Jan 27 18:58:59 compute-0 systemd-sysv-generator[220835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:58:59 compute-0 systemd-rc-local-generator[220831]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:58:59 compute-0 sudo[220803]: pam_unix(sudo:session): session closed for user root
Jan 27 18:58:59 compute-0 sudo[220913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgpjwwznizgkolvxgmyrrbsivodubzac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540337.213111-415-237688941366997/AnsiballZ_systemd.py'
Jan 27 18:58:59 compute-0 sudo[220913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:58:59 compute-0 podman[201378]: time="2026-01-27T18:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 18:58:59 compute-0 podman[201378]: @ - - [27/Jan/2026:18:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 24305 "" "Go-http-client/1.1"
Jan 27 18:58:59 compute-0 podman[201378]: @ - - [27/Jan/2026:18:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3005 "" "Go-http-client/1.1"
Jan 27 18:59:00 compute-0 python3.9[220915]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:59:00 compute-0 systemd[1]: Reloading.
Jan 27 18:59:00 compute-0 systemd-sysv-generator[220943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:59:00 compute-0 systemd-rc-local-generator[220939]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:59:00 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 27 18:59:00 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4a08b8b4044723748f8bb715fa4b1fa07a5966a8222b0b87dbe28503cd48b4/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 18:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4a08b8b4044723748f8bb715fa4b1fa07a5966a8222b0b87dbe28503cd48b4/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 18:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4a08b8b4044723748f8bb715fa4b1fa07a5966a8222b0b87dbe28503cd48b4/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 27 18:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4a08b8b4044723748f8bb715fa4b1fa07a5966a8222b0b87dbe28503cd48b4/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 27 18:59:00 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d.
Jan 27 18:59:00 compute-0 podman[220955]: 2026-01-27 18:59:00.590840201 +0000 UTC m=+0.123691344 container init a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: + sudo -E kolla_set_configs
Jan 27 18:59:00 compute-0 podman[220955]: 2026-01-27 18:59:00.620080117 +0000 UTC m=+0.152931250 container start a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 18:59:00 compute-0 sudo[220977]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 27 18:59:00 compute-0 sudo[220977]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 18:59:00 compute-0 podman[220955]: ceilometer_agent_ipmi
Jan 27 18:59:00 compute-0 sudo[220977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 18:59:00 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 27 18:59:00 compute-0 sudo[220913]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Validating config file
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Copying service configuration files
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: INFO:__main__:Writing out command to execute
Jan 27 18:59:00 compute-0 sudo[220977]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:00 compute-0 podman[220978]: 2026-01-27 18:59:00.685597726 +0000 UTC m=+0.054809725 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: ++ cat /run_command
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: + ARGS=
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: + sudo kolla_copy_cacerts
Jan 27 18:59:00 compute-0 systemd[1]: a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d-326b79437331d7a2.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 18:59:00 compute-0 systemd[1]: a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d-326b79437331d7a2.service: Failed with result 'exit-code'.
Jan 27 18:59:00 compute-0 sudo[221000]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 27 18:59:00 compute-0 sudo[221000]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 18:59:00 compute-0 sudo[221000]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 18:59:00 compute-0 sudo[221000]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: + [[ ! -n '' ]]
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: + . kolla_extend_start
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: + umask 0022
Jan 27 18:59:00 compute-0 ceilometer_agent_ipmi[220971]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 27 18:59:01 compute-0 openstack_network_exporter[204477]: ERROR   18:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 18:59:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 18:59:01 compute-0 openstack_network_exporter[204477]: ERROR   18:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 18:59:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 18:59:01 compute-0 python3.9[221152]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.647 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.648 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.648 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.648 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.648 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.648 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.648 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.648 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.649 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.649 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.649 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.649 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.649 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.649 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.649 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.649 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.649 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.649 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.650 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.650 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.650 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.650 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.650 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.650 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.650 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.650 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.651 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.651 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.651 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.651 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.651 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.651 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.651 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.651 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.651 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.651 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.652 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.652 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.652 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.652 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.652 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.652 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.652 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.652 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.652 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.653 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.653 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.653 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.653 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.653 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.653 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.653 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.653 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.653 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.654 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.654 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.654 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.654 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.654 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.654 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.654 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.654 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.654 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.655 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.655 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.655 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.655 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.655 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.655 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.655 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.655 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.655 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.656 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.656 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.656 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.656 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.656 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.656 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.656 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.656 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.657 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.657 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.657 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.657 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.657 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.657 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.657 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.657 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.657 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.657 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.658 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.659 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.659 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.659 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.659 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.659 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.659 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.659 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.659 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.660 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.660 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.660 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.660 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.660 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.660 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.660 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.660 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.660 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.661 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.661 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.661 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.661 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.661 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.661 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.661 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.661 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.661 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.662 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.662 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.662 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.662 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.662 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.662 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.662 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.662 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.662 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.663 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.663 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.663 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.663 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.663 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.663 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.663 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.663 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.663 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.664 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.664 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.664 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.664 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.664 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.664 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.664 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.664 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.664 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.664 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.664 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.665 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.665 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.665 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.665 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.665 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.665 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.665 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.665 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.665 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.665 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.666 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.666 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.684 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.687 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.689 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 27 18:59:01 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:01.808 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpybxtd2s3/privsep.sock']
Jan 27 18:59:01 compute-0 sudo[221182]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpybxtd2s3/privsep.sock
Jan 27 18:59:01 compute-0 sudo[221182]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 18:59:01 compute-0 sudo[221182]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 18:59:02 compute-0 podman[221217]: 2026-01-27 18:59:02.299316145 +0000 UTC m=+0.070721073 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 18:59:02 compute-0 sudo[221331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdmabeonkqgypbvmmxnauvjogwvbjmck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540342.1659749-460-59244759711172/AnsiballZ_stat.py'
Jan 27 18:59:02 compute-0 sudo[221331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:02 compute-0 sudo[221182]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.487 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.488 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpybxtd2s3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.359 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.364 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.367 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.367 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.626 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.626 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.629 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.629 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.629 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.631 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.632 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.632 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.632 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.632 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.632 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.632 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.632 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.632 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.632 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.632 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.632 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.633 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.633 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.633 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.633 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.633 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.633 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.633 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.633 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.634 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.634 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.634 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.634 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.634 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.634 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.634 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.634 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.634 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.634 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.634 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.635 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.635 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.635 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.635 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.635 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.635 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.635 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.635 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.635 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.635 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.635 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.636 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.636 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.636 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.636 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.636 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.636 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.636 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.636 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.636 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.636 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.636 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.637 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.637 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.637 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.637 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.637 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.637 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.637 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.637 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.637 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.637 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.638 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.638 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.638 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.638 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.638 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.638 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.638 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.638 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.638 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.638 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.638 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.639 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.639 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.639 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.639 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.639 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.639 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.639 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.639 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.639 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.639 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.639 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.639 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.640 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.640 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.640 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.640 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.640 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.640 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.640 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.640 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.640 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.640 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.640 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.641 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.641 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.641 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.641 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.641 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.641 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.641 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.641 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.641 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.641 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.642 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.642 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.642 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.642 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.642 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.642 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.642 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.642 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.642 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.642 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.642 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.643 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.643 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.643 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.643 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.643 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.643 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.643 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.643 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.643 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.643 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.643 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.644 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.644 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.644 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.644 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.644 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.644 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.644 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.644 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.644 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.644 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.644 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.645 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.645 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.645 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.645 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.645 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.645 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.645 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.645 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.645 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.645 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.645 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.645 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.646 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.646 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.646 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.646 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.646 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.646 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.646 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.646 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.646 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.646 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.646 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.646 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.647 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.647 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.647 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.647 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.647 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.647 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.647 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.647 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.647 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.647 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.647 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.647 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.648 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.648 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.648 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.648 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.648 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.648 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.648 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.648 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.648 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.648 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.648 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.649 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.649 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.649 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.649 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.649 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.649 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.649 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.649 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.649 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.649 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.649 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.650 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.650 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.650 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.650 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.650 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 27 18:59:02 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:02.652 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 27 18:59:02 compute-0 python3.9[221333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:59:02 compute-0 sudo[221331]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:03 compute-0 sudo[221460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpbvzbfggtjuzjyumrovifwakljrmvlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540342.1659749-460-59244759711172/AnsiballZ_copy.py'
Jan 27 18:59:03 compute-0 sudo[221460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:03 compute-0 python3.9[221462]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540342.1659749-460-59244759711172/.source.yaml _original_basename=.6xibvjfw follow=False checksum=7260f9c28c04f74c3d5a4639f5aab2ef3ca4311d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:03 compute-0 sudo[221460]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:03 compute-0 sudo[221612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyjcdzwfmwbcocgdyopqnlcpfvsnqmrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540343.5996017-477-61986578071537/AnsiballZ_file.py'
Jan 27 18:59:03 compute-0 sudo[221612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:04 compute-0 python3.9[221614]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:04 compute-0 sudo[221612]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:04 compute-0 sudo[221764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eykuoiyjxukmtmelfosvycxtndaorxcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540344.3283145-485-151617659535895/AnsiballZ_file.py'
Jan 27 18:59:04 compute-0 sudo[221764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:04 compute-0 python3.9[221766]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 18:59:04 compute-0 sudo[221764]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:05 compute-0 podman[221891]: 2026-01-27 18:59:05.506887693 +0000 UTC m=+0.071028441 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 18:59:05 compute-0 podman[221890]: 2026-01-27 18:59:05.550599533 +0000 UTC m=+0.109584909 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 18:59:05 compute-0 python3.9[221942]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/kepler state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:06 compute-0 podman[222083]: 2026-01-27 18:59:06.226035736 +0000 UTC m=+0.061698129 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 18:59:07 compute-0 sudo[222404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyelrfihycxgisgxtwprtqooerlrrpic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540347.3336473-519-271493445508964/AnsiballZ_container_config_data.py'
Jan 27 18:59:07 compute-0 sudo[222404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:07 compute-0 python3.9[222406]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/kepler config_pattern=*.json debug=False
Jan 27 18:59:07 compute-0 sudo[222404]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:08 compute-0 sudo[222556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oycutpauwjxszaofjgdatomlzgctdyrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540348.183652-530-19301358898035/AnsiballZ_container_config_hash.py'
Jan 27 18:59:08 compute-0 sudo[222556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:08 compute-0 python3.9[222558]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 18:59:08 compute-0 sudo[222556]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:09 compute-0 sudo[222708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxlohzpufxytwxtwycmzunzrfibejqvs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540348.9517846-540-7640720824277/AnsiballZ_edpm_container_manage.py'
Jan 27 18:59:09 compute-0 sudo[222708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:09 compute-0 python3[222710]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/kepler config_id=kepler config_overrides={} config_patterns=*.json containers=['kepler'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 18:59:09 compute-0 podman[222748]: 2026-01-27 18:59:09.817081409 +0000 UTC m=+0.048186438 container create b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, distribution-scope=public, io.buildah.version=1.29.0)
Jan 27 18:59:09 compute-0 podman[222748]: 2026-01-27 18:59:09.793655552 +0000 UTC m=+0.024760601 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 27 18:59:09 compute-0 python3[222710]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_CONTAINER_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env EXPOSE_VM_METRICS=true --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=kepler --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Jan 27 18:59:09 compute-0 sudo[222708]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:10 compute-0 sudo[222936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lygiceqwdsfgyfwtlapaqwokbpsswtcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540350.1038625-548-16779063273859/AnsiballZ_stat.py'
Jan 27 18:59:10 compute-0 sudo[222936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:10 compute-0 python3.9[222938]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:59:10 compute-0 sudo[222936]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:11 compute-0 sudo[223090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpfmtmdezsjyztinaeozgngmmvfutaqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540350.8244717-557-73559673806446/AnsiballZ_file.py'
Jan 27 18:59:11 compute-0 sudo[223090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:11 compute-0 python3.9[223092]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:11 compute-0 sudo[223090]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:11 compute-0 sudo[223166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfforuxknixridyjghyzklogukqosvol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540350.8244717-557-73559673806446/AnsiballZ_stat.py'
Jan 27 18:59:11 compute-0 sudo[223166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:11 compute-0 python3.9[223168]: ansible-stat Invoked with path=/etc/systemd/system/edpm_kepler_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 18:59:11 compute-0 sudo[223166]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:12 compute-0 sudo[223317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azftirsuyekkquipihfoalaakvzpbrha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540351.8274455-557-204381636082414/AnsiballZ_copy.py'
Jan 27 18:59:12 compute-0 sudo[223317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:12 compute-0 python3.9[223319]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769540351.8274455-557-204381636082414/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:12 compute-0 sudo[223317]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:12 compute-0 sudo[223393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssbkgrebmxpxvhzclildcbauktetuyzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540351.8274455-557-204381636082414/AnsiballZ_systemd.py'
Jan 27 18:59:12 compute-0 sudo[223393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:13 compute-0 python3.9[223395]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 18:59:13 compute-0 systemd[1]: Reloading.
Jan 27 18:59:13 compute-0 systemd-rc-local-generator[223424]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:59:13 compute-0 systemd-sysv-generator[223428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:59:13 compute-0 sudo[223393]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:13 compute-0 sudo[223505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsrtlgngyrlvlkaohlumpbnnqkdsbtph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540351.8274455-557-204381636082414/AnsiballZ_systemd.py'
Jan 27 18:59:13 compute-0 sudo[223505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:14 compute-0 python3.9[223507]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 18:59:14 compute-0 systemd[1]: Reloading.
Jan 27 18:59:14 compute-0 systemd-rc-local-generator[223536]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 18:59:14 compute-0 systemd-sysv-generator[223541]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 18:59:14 compute-0 systemd[1]: Starting kepler container...
Jan 27 18:59:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:59:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f.
Jan 27 18:59:14 compute-0 podman[223547]: 2026-01-27 18:59:14.765548363 +0000 UTC m=+0.120976740 container init b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.29.0, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, name=ubi9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container)
Jan 27 18:59:14 compute-0 kepler[223562]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 27 18:59:14 compute-0 kepler[223562]: I0127 18:59:14.790487       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 27 18:59:14 compute-0 kepler[223562]: I0127 18:59:14.790657       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 27 18:59:14 compute-0 kepler[223562]: I0127 18:59:14.790694       1 config.go:295] kernel version: 5.14
Jan 27 18:59:14 compute-0 kepler[223562]: I0127 18:59:14.791272       1 power.go:78] Unable to obtain power, use estimate method
Jan 27 18:59:14 compute-0 kepler[223562]: I0127 18:59:14.791297       1 redfish.go:169] failed to get redfish credential file path
Jan 27 18:59:14 compute-0 kepler[223562]: I0127 18:59:14.791594       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 27 18:59:14 compute-0 kepler[223562]: I0127 18:59:14.791605       1 power.go:79] using none to obtain power
Jan 27 18:59:14 compute-0 kepler[223562]: E0127 18:59:14.791618       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 27 18:59:14 compute-0 kepler[223562]: E0127 18:59:14.791640       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 27 18:59:14 compute-0 kepler[223562]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 27 18:59:14 compute-0 kepler[223562]: I0127 18:59:14.793537       1 exporter.go:84] Number of CPUs: 8
Jan 27 18:59:14 compute-0 podman[223547]: 2026-01-27 18:59:14.800530435 +0000 UTC m=+0.155958802 container start b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, config_id=kepler, container_name=kepler, vcs-type=git, architecture=x86_64, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, distribution-scope=public, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 18:59:14 compute-0 podman[223547]: kepler
Jan 27 18:59:14 compute-0 systemd[1]: Started kepler container.
Jan 27 18:59:14 compute-0 sudo[223505]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:14 compute-0 podman[223572]: 2026-01-27 18:59:14.89658733 +0000 UTC m=+0.085990516 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, managed_by=edpm_ansible)
Jan 27 18:59:14 compute-0 systemd[1]: b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f-6b15be65cdc49025.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 18:59:14 compute-0 systemd[1]: b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f-6b15be65cdc49025.service: Failed with result 'exit-code'.
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.398014       1 watcher.go:83] Using in cluster k8s config
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.398070       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 27 18:59:15 compute-0 kepler[223562]: E0127 18:59:15.398166       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.402490       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.402524       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.406640       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.406695       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.414112       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.414150       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.414165       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.421469       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.421499       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.421504       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.421508       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.421513       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.421524       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.421612       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.421634       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.421651       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.421849       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.421943       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 27 18:59:15 compute-0 kepler[223562]: I0127 18:59:15.422264       1 exporter.go:208] Started Kepler in 631.978349ms
Jan 27 18:59:15 compute-0 python3.9[223756]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 18:59:16 compute-0 sudo[223906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoigtqrataacihntiqhweblpidtxqmmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540356.0703063-602-194492037180575/AnsiballZ_stat.py'
Jan 27 18:59:16 compute-0 sudo[223906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:16 compute-0 python3.9[223908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:59:16 compute-0 sudo[223906]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:17 compute-0 sudo[224031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxgamortyivctwnxivshpwrwwaksnngr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540356.0703063-602-194492037180575/AnsiballZ_copy.py'
Jan 27 18:59:17 compute-0 sudo[224031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:17 compute-0 python3.9[224033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540356.0703063-602-194492037180575/.source.yaml _original_basename=.tkhnpw4h follow=False checksum=8a6866421c6bf12ef988542112fc71b4d1570380 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:17 compute-0 sudo[224031]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:18 compute-0 sudo[224183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kievipztnhadzbuhfldqdbbzchhtjqun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540357.6243682-617-275456284727926/AnsiballZ_systemd.py'
Jan 27 18:59:18 compute-0 sudo[224183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:18 compute-0 python3.9[224185]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:59:18 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Jan 27 18:59:18 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:18.469 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Jan 27 18:59:18 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:18.571 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Jan 27 18:59:18 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:18.571 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Jan 27 18:59:18 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:18.572 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Jan 27 18:59:18 compute-0 ceilometer_agent_ipmi[220971]: 2026-01-27 18:59:18.583 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Jan 27 18:59:18 compute-0 systemd[1]: libpod-a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d.scope: Deactivated successfully.
Jan 27 18:59:18 compute-0 systemd[1]: libpod-a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d.scope: Consumed 2.318s CPU time.
Jan 27 18:59:18 compute-0 conmon[220971]: conmon a7bdf1a5968d03f9ef24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d.scope/container/memory.events
Jan 27 18:59:18 compute-0 podman[224189]: 2026-01-27 18:59:18.747138379 +0000 UTC m=+0.363152073 container died a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 27 18:59:18 compute-0 systemd[1]: a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d-326b79437331d7a2.timer: Deactivated successfully.
Jan 27 18:59:18 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d.
Jan 27 18:59:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d-userdata-shm.mount: Deactivated successfully.
Jan 27 18:59:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf4a08b8b4044723748f8bb715fa4b1fa07a5966a8222b0b87dbe28503cd48b4-merged.mount: Deactivated successfully.
Jan 27 18:59:18 compute-0 podman[224204]: 2026-01-27 18:59:18.867612586 +0000 UTC m=+0.086139091 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 18:59:18 compute-0 podman[224189]: 2026-01-27 18:59:18.939528257 +0000 UTC m=+0.555541981 container cleanup a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi)
Jan 27 18:59:18 compute-0 podman[224189]: ceilometer_agent_ipmi
Jan 27 18:59:19 compute-0 podman[224240]: ceilometer_agent_ipmi
Jan 27 18:59:19 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Jan 27 18:59:19 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Jan 27 18:59:19 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 27 18:59:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4a08b8b4044723748f8bb715fa4b1fa07a5966a8222b0b87dbe28503cd48b4/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 18:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4a08b8b4044723748f8bb715fa4b1fa07a5966a8222b0b87dbe28503cd48b4/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 18:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4a08b8b4044723748f8bb715fa4b1fa07a5966a8222b0b87dbe28503cd48b4/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 27 18:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4a08b8b4044723748f8bb715fa4b1fa07a5966a8222b0b87dbe28503cd48b4/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 27 18:59:19 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d.
Jan 27 18:59:19 compute-0 podman[224253]: 2026-01-27 18:59:19.245262962 +0000 UTC m=+0.194366246 container init a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: + sudo -E kolla_set_configs
Jan 27 18:59:19 compute-0 podman[224253]: 2026-01-27 18:59:19.280137922 +0000 UTC m=+0.229241196 container start a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:59:19 compute-0 sudo[224274]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 27 18:59:19 compute-0 sudo[224274]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 18:59:19 compute-0 sudo[224274]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 18:59:19 compute-0 podman[224253]: ceilometer_agent_ipmi
Jan 27 18:59:19 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 27 18:59:19 compute-0 sudo[224183]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Validating config file
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Copying service configuration files
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: INFO:__main__:Writing out command to execute
Jan 27 18:59:19 compute-0 sudo[224274]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: ++ cat /run_command
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: + ARGS=
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: + sudo kolla_copy_cacerts
Jan 27 18:59:19 compute-0 podman[224275]: 2026-01-27 18:59:19.393018058 +0000 UTC m=+0.095039663 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Jan 27 18:59:19 compute-0 sudo[224292]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 27 18:59:19 compute-0 sudo[224292]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 18:59:19 compute-0 sudo[224292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 18:59:19 compute-0 systemd[1]: a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d-2044240aed6a513c.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 18:59:19 compute-0 systemd[1]: a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d-2044240aed6a513c.service: Failed with result 'exit-code'.
Jan 27 18:59:19 compute-0 sudo[224292]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: + [[ ! -n '' ]]
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: + . kolla_extend_start
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: + umask 0022
Jan 27 18:59:19 compute-0 ceilometer_agent_ipmi[224268]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 27 18:59:20 compute-0 sudo[224449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khbejrrfnojvndpyzpvvixopnelgdjtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540359.687204-625-238740256173131/AnsiballZ_systemd.py'
Jan 27 18:59:20 compute-0 sudo[224449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.291 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.291 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.291 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.291 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.291 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.291 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.291 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.292 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.292 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.292 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.292 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.292 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.292 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.292 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.292 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.292 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.292 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.293 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.293 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.293 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.293 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.293 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.293 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.293 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.293 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.293 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.293 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.294 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.294 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.294 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.294 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.294 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.294 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.294 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.294 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.294 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.294 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.294 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.295 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.295 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.295 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.295 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.295 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.295 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.295 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.295 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.295 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.295 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.296 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.296 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.296 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.296 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.296 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.296 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.296 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.296 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.296 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.297 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.297 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.297 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.297 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.297 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.297 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.297 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.297 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.297 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.297 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.298 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.298 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.298 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.298 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.298 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.298 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.298 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.299 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.299 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.299 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.299 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.299 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.299 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.299 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.299 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.299 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.299 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.299 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.300 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.300 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.300 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.300 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.300 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.300 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.300 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.300 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.300 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.300 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.300 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.301 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.301 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.301 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.301 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.301 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.301 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.301 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.301 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.301 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.301 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.302 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.302 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.302 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.302 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.302 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.302 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.302 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.302 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.302 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.302 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.303 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.303 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.303 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.303 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.303 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.303 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.303 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.303 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.303 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.303 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.303 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.304 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.304 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.304 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.304 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.304 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.304 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.304 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.304 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.304 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.304 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.304 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.305 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.305 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.305 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.305 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.305 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.305 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.305 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.305 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.305 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.305 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.305 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.306 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.306 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.306 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.306 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.306 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.306 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.306 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.306 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.306 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.306 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.306 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.307 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.307 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.307 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.307 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.307 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.328 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.330 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.330 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 27 18:59:20 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.342 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpxw7qe8w6/privsep.sock']
Jan 27 18:59:20 compute-0 sudo[224456]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpxw7qe8w6/privsep.sock
Jan 27 18:59:20 compute-0 sudo[224456]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 18:59:20 compute-0 sudo[224456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 18:59:20 compute-0 python3.9[224451]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 18:59:20 compute-0 systemd[1]: Stopping kepler container...
Jan 27 18:59:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:59:20.497 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:59:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:59:20.498 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:59:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 18:59:20.499 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:59:20 compute-0 kepler[223562]: I0127 18:59:20.526312       1 exporter.go:218] Received shutdown signal
Jan 27 18:59:20 compute-0 kepler[223562]: I0127 18:59:20.526893       1 exporter.go:226] Exiting...
Jan 27 18:59:20 compute-0 systemd[1]: libpod-b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f.scope: Deactivated successfully.
Jan 27 18:59:20 compute-0 podman[224462]: 2026-01-27 18:59:20.705833188 +0000 UTC m=+0.240387171 container died b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, architecture=x86_64, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
Jan 27 18:59:20 compute-0 systemd[1]: b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f-6b15be65cdc49025.timer: Deactivated successfully.
Jan 27 18:59:20 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f.
Jan 27 18:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f-userdata-shm.mount: Deactivated successfully.
Jan 27 18:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f356916d8fc69d901fdc6839759c6f0a8fefaee388a80f58cff539c29949dd4a-merged.mount: Deactivated successfully.
Jan 27 18:59:20 compute-0 podman[224462]: 2026-01-27 18:59:20.770768093 +0000 UTC m=+0.305322076 container cleanup b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9)
Jan 27 18:59:20 compute-0 podman[224462]: kepler
Jan 27 18:59:20 compute-0 podman[224491]: kepler
Jan 27 18:59:20 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Jan 27 18:59:20 compute-0 systemd[1]: Stopped kepler container.
Jan 27 18:59:20 compute-0 systemd[1]: Starting kepler container...
Jan 27 18:59:20 compute-0 systemd[1]: Started libcrun container.
Jan 27 18:59:21 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f.
Jan 27 18:59:21 compute-0 podman[224504]: 2026-01-27 18:59:21.037353406 +0000 UTC m=+0.138208209 container init b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, release-0.7.12=, vcs-type=git, architecture=x86_64, version=9.4, config_id=kepler, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 18:59:21 compute-0 kepler[224521]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.071606       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.071795       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.071833       1 config.go:295] kernel version: 5.14
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.072625       1 power.go:78] Unable to obtain power, use estimate method
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.072648       1 redfish.go:169] failed to get redfish credential file path
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.073031       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.073044       1 power.go:79] using none to obtain power
Jan 27 18:59:21 compute-0 kepler[224521]: E0127 18:59:21.073059       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 27 18:59:21 compute-0 kepler[224521]: E0127 18:59:21.073090       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 27 18:59:21 compute-0 kepler[224521]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.075038       1 exporter.go:84] Number of CPUs: 8
Jan 27 18:59:21 compute-0 podman[224504]: 2026-01-27 18:59:21.080814981 +0000 UTC m=+0.181669844 container start b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, io.buildah.version=1.29.0, vcs-type=git, build-date=2024-09-18T21:23:30, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public)
Jan 27 18:59:21 compute-0 podman[224504]: kepler
Jan 27 18:59:21 compute-0 systemd[1]: Started kepler container.
Jan 27 18:59:21 compute-0 sudo[224456]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.112 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.113 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpxw7qe8w6/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.946 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.950 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.952 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:20.953 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 27 18:59:21 compute-0 sudo[224449]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:21 compute-0 podman[224531]: 2026-01-27 18:59:21.192037458 +0000 UTC m=+0.096314603 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 18:59:21 compute-0 systemd[1]: b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f-5270e53eeb0d002e.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 18:59:21 compute-0 systemd[1]: b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f-5270e53eeb0d002e.service: Failed with result 'exit-code'.
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.249 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.249 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.251 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.251 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.251 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.251 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.252 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.252 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.252 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.252 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.252 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.252 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.253 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.257 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.257 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.258 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.258 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.258 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.258 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.258 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.258 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.258 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.259 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.259 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.259 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.259 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.260 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.260 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.260 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.260 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.260 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.261 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.261 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.261 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.261 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.261 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.261 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.261 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.262 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.262 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.262 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.262 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.262 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.262 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.263 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.263 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.263 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.263 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.263 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.263 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.263 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.263 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.264 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.264 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.264 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.264 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.264 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.264 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.265 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.265 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.265 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.265 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.265 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.265 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.266 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.266 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.266 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.266 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.266 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.266 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.267 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.267 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.267 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.267 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.267 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.267 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.268 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.268 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.270 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.270 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.270 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.271 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.271 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.271 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.271 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.271 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.272 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.272 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.272 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.272 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.272 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.272 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.272 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.272 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.273 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.273 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.273 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.273 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.273 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.273 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.273 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.274 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.274 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.274 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.274 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.274 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.274 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.274 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.274 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.275 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.275 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.275 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.275 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.275 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.275 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.275 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.276 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.276 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.276 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.276 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.276 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.276 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.276 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.277 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.277 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.277 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.277 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.277 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.277 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.277 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.278 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.278 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.278 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.278 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.278 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.278 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.278 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.278 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.279 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.279 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.279 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.279 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.279 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.279 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.279 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.280 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.280 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.280 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.280 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.280 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.280 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.280 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.281 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.281 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.281 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.281 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.281 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.281 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.281 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.282 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.282 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.282 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.282 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.282 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.282 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.282 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.282 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.283 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.283 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.283 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.283 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.283 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.283 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.283 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.284 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.284 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.284 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.284 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.284 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.284 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.284 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.284 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.284 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.285 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.285 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.285 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.285 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.285 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.285 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.285 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.286 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.286 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.286 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.286 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.286 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.286 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.286 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.286 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.287 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.287 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.287 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.287 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.287 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.287 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.287 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.288 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.288 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.288 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.288 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.288 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.288 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.288 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.288 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.289 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.289 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.289 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 27 18:59:21 compute-0 ceilometer_agent_ipmi[224268]: 2026-01-27 18:59:21.293 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 27 18:59:21 compute-0 sudo[224708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgfutuscnccrxodukpgpkoywfjnjxddi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540361.3134024-633-105065242923675/AnsiballZ_find.py'
Jan 27 18:59:21 compute-0 sudo[224708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.667384       1 watcher.go:83] Using in cluster k8s config
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.667428       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 27 18:59:21 compute-0 kepler[224521]: E0127 18:59:21.668136       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.676018       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.676061       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.680472       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.680511       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.691622       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.691697       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.691716       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.702571       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.702610       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.702616       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.702621       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.702627       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.702641       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.703874       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.703926       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.703945       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.703965       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.704528       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 27 18:59:21 compute-0 kepler[224521]: I0127 18:59:21.705069       1 exporter.go:208] Started Kepler in 633.665188ms
Jan 27 18:59:21 compute-0 python3.9[224710]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 18:59:21 compute-0 sudo[224708]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:22 compute-0 sudo[224870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srlqrbyjneqhkkcemnvynskdyryywxhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540362.2547455-643-98090455891072/AnsiballZ_podman_container_info.py'
Jan 27 18:59:22 compute-0 sudo[224870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:23 compute-0 python3.9[224872]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 27 18:59:23 compute-0 sudo[224870]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:24 compute-0 sudo[225034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxukhuoyuqxhaorywrikmdjcgvebtre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540363.4513977-651-192063573867037/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:24 compute-0 sudo[225034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:24 compute-0 python3.9[225036]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:24 compute-0 systemd[1]: Started libpod-conmon-94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b.scope.
Jan 27 18:59:24 compute-0 podman[225037]: 2026-01-27 18:59:24.401828907 +0000 UTC m=+0.101599408 container exec 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:59:24 compute-0 podman[225057]: 2026-01-27 18:59:24.480309975 +0000 UTC m=+0.059255371 container exec_died 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 18:59:24 compute-0 podman[225037]: 2026-01-27 18:59:24.487570649 +0000 UTC m=+0.187341160 container exec_died 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:59:24 compute-0 systemd[1]: libpod-conmon-94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b.scope: Deactivated successfully.
Jan 27 18:59:24 compute-0 sudo[225034]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:25 compute-0 sudo[225229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ermkbovjoxpwniocsmrjnaddalupntpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540364.7898993-659-201355178120878/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:25 compute-0 sudo[225229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:25 compute-0 podman[225191]: 2026-01-27 18:59:25.290248029 +0000 UTC m=+0.100816030 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Jan 27 18:59:25 compute-0 python3.9[225237]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:25 compute-0 systemd[1]: Started libpod-conmon-94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b.scope.
Jan 27 18:59:25 compute-0 podman[225238]: 2026-01-27 18:59:25.615022157 +0000 UTC m=+0.103896613 container exec 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 18:59:25 compute-0 podman[225238]: 2026-01-27 18:59:25.65003929 +0000 UTC m=+0.138913726 container exec_died 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:59:25 compute-0 systemd[1]: libpod-conmon-94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b.scope: Deactivated successfully.
Jan 27 18:59:25 compute-0 sudo[225229]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:26 compute-0 sudo[225417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzowucvkrqyarcrelpdswchxarszrmjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540365.9276226-667-7881524063997/AnsiballZ_file.py'
Jan 27 18:59:26 compute-0 sudo[225417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:26 compute-0 python3.9[225419]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:26 compute-0 sudo[225417]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:27 compute-0 sudo[225569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvpvtndlqftjfgdhjjeadogszazwbsgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540366.8469272-676-273064227240897/AnsiballZ_podman_container_info.py'
Jan 27 18:59:27 compute-0 sudo[225569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:27 compute-0 python3.9[225571]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 27 18:59:27 compute-0 sudo[225569]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:28 compute-0 sudo[225735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgwhhzizcgpionyekyrodfepdvjujejt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540367.8688262-684-198154523153575/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:28 compute-0 sudo[225735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:28 compute-0 python3.9[225737]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:28 compute-0 systemd[1]: Started libpod-conmon-fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943.scope.
Jan 27 18:59:28 compute-0 podman[225738]: 2026-01-27 18:59:28.695775197 +0000 UTC m=+0.121415941 container exec fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 18:59:28 compute-0 podman[225738]: 2026-01-27 18:59:28.730260408 +0000 UTC m=+0.155901152 container exec_died fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 18:59:28 compute-0 sudo[225735]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:28 compute-0 systemd[1]: libpod-conmon-fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943.scope: Deactivated successfully.
Jan 27 18:59:29 compute-0 sudo[225917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcrtdsqhsdkdumkynsorktxuksbjinvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540369.0919468-692-128681359225764/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:29 compute-0 sudo[225917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:29 compute-0 python3.9[225919]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:29 compute-0 podman[201378]: time="2026-01-27T18:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 18:59:29 compute-0 podman[201378]: @ - - [27/Jan/2026:18:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27276 "" "Go-http-client/1.1"
Jan 27 18:59:29 compute-0 podman[201378]: @ - - [27/Jan/2026:18:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3842 "" "Go-http-client/1.1"
Jan 27 18:59:29 compute-0 systemd[1]: Started libpod-conmon-fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943.scope.
Jan 27 18:59:29 compute-0 podman[225920]: 2026-01-27 18:59:29.911939077 +0000 UTC m=+0.139375558 container exec fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 18:59:29 compute-0 podman[225920]: 2026-01-27 18:59:29.946846258 +0000 UTC m=+0.174282769 container exec_died fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 18:59:30 compute-0 sudo[225917]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:30 compute-0 systemd[1]: libpod-conmon-fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943.scope: Deactivated successfully.
Jan 27 18:59:30 compute-0 sudo[226097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khkuykntkwbqesaapveridxlwhpooxnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540370.2568629-700-37631059769261/AnsiballZ_file.py'
Jan 27 18:59:30 compute-0 sudo[226097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:30 compute-0 python3.9[226099]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:30 compute-0 sudo[226097]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:31 compute-0 openstack_network_exporter[204477]: ERROR   18:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 18:59:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 18:59:31 compute-0 openstack_network_exporter[204477]: ERROR   18:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 18:59:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 18:59:31 compute-0 sudo[226249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxblyvtrfbjniygjonszdmuuthmaymkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540371.2182584-709-130688786409232/AnsiballZ_podman_container_info.py'
Jan 27 18:59:31 compute-0 sudo[226249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:31 compute-0 python3.9[226251]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 27 18:59:31 compute-0 sudo[226249]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 18:59:32 compute-0 sudo[226423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wacelrhluijwdrkjzhigfvjteitaeioz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540372.1337256-717-203957509405540/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.543 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.544 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.544 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:59:32 compute-0 sudo[226423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.544 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.544 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:59:32 compute-0 podman[226386]: 2026-01-27 18:59:32.589249247 +0000 UTC m=+0.119914715 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126)
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.597 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.597 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.598 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.598 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 18:59:32 compute-0 python3.9[226431]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:32 compute-0 systemd[1]: Started libpod-conmon-a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4.scope.
Jan 27 18:59:32 compute-0 podman[226434]: 2026-01-27 18:59:32.955284626 +0000 UTC m=+0.128654832 container exec a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.960 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.962 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5723MB free_disk=72.4783821105957GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.962 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 18:59:32 compute-0 nova_compute[185480]: 2026-01-27 18:59:32.962 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 18:59:32 compute-0 podman[226434]: 2026-01-27 18:59:32.988548558 +0000 UTC m=+0.161918774 container exec_died a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260126, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 18:59:33 compute-0 systemd[1]: libpod-conmon-a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4.scope: Deactivated successfully.
Jan 27 18:59:33 compute-0 sudo[226423]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:33 compute-0 nova_compute[185480]: 2026-01-27 18:59:33.094 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 18:59:33 compute-0 nova_compute[185480]: 2026-01-27 18:59:33.095 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 18:59:33 compute-0 nova_compute[185480]: 2026-01-27 18:59:33.121 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 18:59:33 compute-0 nova_compute[185480]: 2026-01-27 18:59:33.133 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 18:59:33 compute-0 nova_compute[185480]: 2026-01-27 18:59:33.135 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 18:59:33 compute-0 nova_compute[185480]: 2026-01-27 18:59:33.135 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 18:59:33 compute-0 sudo[226613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkxsfpctlfghrrapnihoetvveenbqwyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540373.2535925-725-79768046100680/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:33 compute-0 sudo[226613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:33 compute-0 python3.9[226615]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:33 compute-0 systemd[1]: Started libpod-conmon-a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4.scope.
Jan 27 18:59:33 compute-0 podman[226616]: 2026-01-27 18:59:33.94760589 +0000 UTC m=+0.101629400 container exec a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 18:59:33 compute-0 podman[226616]: 2026-01-27 18:59:33.982372477 +0000 UTC m=+0.136395927 container exec_died a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 27 18:59:34 compute-0 systemd[1]: libpod-conmon-a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4.scope: Deactivated successfully.
Jan 27 18:59:34 compute-0 sudo[226613]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:34 compute-0 nova_compute[185480]: 2026-01-27 18:59:34.106 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:59:34 compute-0 nova_compute[185480]: 2026-01-27 18:59:34.106 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:59:34 compute-0 nova_compute[185480]: 2026-01-27 18:59:34.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:59:34 compute-0 nova_compute[185480]: 2026-01-27 18:59:34.526 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:59:34 compute-0 nova_compute[185480]: 2026-01-27 18:59:34.526 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:59:34 compute-0 nova_compute[185480]: 2026-01-27 18:59:34.526 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 18:59:34 compute-0 sudo[226795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpmawglldycyfbuyvslbyuqaadbouylb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540374.248671-733-48852165371557/AnsiballZ_file.py'
Jan 27 18:59:34 compute-0 sudo[226795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:34 compute-0 python3.9[226797]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:34 compute-0 sudo[226795]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:35 compute-0 sudo[226947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azgahwydqypjihisoitilnqrfmdrnwot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540375.1447773-742-222462325930536/AnsiballZ_podman_container_info.py'
Jan 27 18:59:35 compute-0 sudo[226947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:35 compute-0 python3.9[226949]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 27 18:59:35 compute-0 sudo[226947]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:36 compute-0 podman[227060]: 2026-01-27 18:59:36.324049669 +0000 UTC m=+0.092610084 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 27 18:59:36 compute-0 podman[227056]: 2026-01-27 18:59:36.402517826 +0000 UTC m=+0.176271465 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 18:59:36 compute-0 sudo[227176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zotnloyreqfzhentknphjuxsorbefqtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540376.050202-750-30987527504674/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:36 compute-0 sudo[227176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:36 compute-0 podman[227122]: 2026-01-27 18:59:36.456093272 +0000 UTC m=+0.099052358 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 18:59:36 compute-0 python3.9[227179]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:36 compute-0 systemd[1]: Started libpod-conmon-87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3.scope.
Jan 27 18:59:36 compute-0 podman[227181]: 2026-01-27 18:59:36.81703102 +0000 UTC m=+0.140024913 container exec 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 18:59:36 compute-0 podman[227181]: 2026-01-27 18:59:36.853368695 +0000 UTC m=+0.176362598 container exec_died 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 18:59:36 compute-0 sudo[227176]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:36 compute-0 systemd[1]: libpod-conmon-87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3.scope: Deactivated successfully.
Jan 27 18:59:37 compute-0 sudo[227360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzlkkqwxoafrfazxkcfpazkdrlfduxxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540377.1524887-758-106422152572727/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:37 compute-0 sudo[227360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:37 compute-0 python3.9[227362]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:37 compute-0 systemd[1]: Started libpod-conmon-87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3.scope.
Jan 27 18:59:37 compute-0 podman[227363]: 2026-01-27 18:59:37.934262465 +0000 UTC m=+0.102015118 container exec 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 18:59:37 compute-0 podman[227363]: 2026-01-27 18:59:37.96681002 +0000 UTC m=+0.134562663 container exec_died 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 18:59:38 compute-0 systemd[1]: libpod-conmon-87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3.scope: Deactivated successfully.
Jan 27 18:59:38 compute-0 sudo[227360]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:38 compute-0 sudo[227544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzyvcflqxsbhdqeuxrccxvqixuopzumk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540378.245419-766-57558109980834/AnsiballZ_file.py'
Jan 27 18:59:38 compute-0 sudo[227544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:38 compute-0 python3.9[227546]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:38 compute-0 sudo[227544]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:39 compute-0 sudo[227696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwgvgghvcelwatmadhcqdgglbfvgbkty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540379.158353-775-62358667600885/AnsiballZ_podman_container_info.py'
Jan 27 18:59:39 compute-0 sudo[227696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:39 compute-0 python3.9[227698]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 27 18:59:39 compute-0 sudo[227696]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:40 compute-0 sudo[227861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvwsqziaaxlzizbsisglwbiotvmnmgum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540380.1055288-783-272055188919578/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:40 compute-0 sudo[227861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:40 compute-0 python3.9[227863]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:40 compute-0 systemd[1]: Started libpod-conmon-2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39.scope.
Jan 27 18:59:40 compute-0 podman[227864]: 2026-01-27 18:59:40.904217136 +0000 UTC m=+0.104717083 container exec 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 18:59:40 compute-0 podman[227864]: 2026-01-27 18:59:40.940495709 +0000 UTC m=+0.140995646 container exec_died 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 18:59:41 compute-0 systemd[1]: libpod-conmon-2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39.scope: Deactivated successfully.
Jan 27 18:59:41 compute-0 sudo[227861]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:41 compute-0 sudo[228044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqnajmuksphuigsymcedthpprtrowjwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540381.3983307-791-49387378168866/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:41 compute-0 sudo[228044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:42 compute-0 python3.9[228046]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:42 compute-0 systemd[1]: Started libpod-conmon-2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39.scope.
Jan 27 18:59:42 compute-0 podman[228047]: 2026-01-27 18:59:42.16136313 +0000 UTC m=+0.116627946 container exec 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 18:59:42 compute-0 podman[228047]: 2026-01-27 18:59:42.194951769 +0000 UTC m=+0.150216505 container exec_died 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 18:59:42 compute-0 systemd[1]: libpod-conmon-2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39.scope: Deactivated successfully.
Jan 27 18:59:42 compute-0 sudo[228044]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:42 compute-0 sudo[228225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhccptvugsdgpsbzzmppsbmaczkbmgoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540382.5040963-799-265608724868272/AnsiballZ_file.py'
Jan 27 18:59:42 compute-0 sudo[228225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:43 compute-0 python3.9[228227]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:43 compute-0 sudo[228225]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:43 compute-0 sudo[228377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucsqsiadjrgruwbuxifawhykennfhjxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540383.3622923-808-168735794074305/AnsiballZ_podman_container_info.py'
Jan 27 18:59:43 compute-0 sudo[228377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:43 compute-0 python3.9[228379]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 27 18:59:44 compute-0 sudo[228377]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:44 compute-0 sudo[228541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iokkexzjdkazsqcqispuglfojtsbsesf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540384.3433974-816-185383833617452/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:44 compute-0 sudo[228541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:44 compute-0 python3.9[228543]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:45 compute-0 systemd[1]: Started libpod-conmon-2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7.scope.
Jan 27 18:59:45 compute-0 podman[228544]: 2026-01-27 18:59:45.106959392 +0000 UTC m=+0.156210897 container exec 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Jan 27 18:59:45 compute-0 podman[228562]: 2026-01-27 18:59:45.224373467 +0000 UTC m=+0.099768906 container exec_died 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=openstack_network_exporter, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Jan 27 18:59:45 compute-0 podman[228544]: 2026-01-27 18:59:45.291914803 +0000 UTC m=+0.341166348 container exec_died 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=openstack_network_exporter, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter)
Jan 27 18:59:45 compute-0 systemd[1]: libpod-conmon-2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7.scope: Deactivated successfully.
Jan 27 18:59:45 compute-0 sudo[228541]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:46 compute-0 sudo[228721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgqnroqizyuukihmbdmhrafrbitinlhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540385.6391733-824-100537725894915/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:46 compute-0 sudo[228721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:46 compute-0 python3.9[228723]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:46 compute-0 systemd[1]: Started libpod-conmon-2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7.scope.
Jan 27 18:59:46 compute-0 podman[228724]: 2026-01-27 18:59:46.383880668 +0000 UTC m=+0.124752120 container exec 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_id=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Jan 27 18:59:46 compute-0 podman[228724]: 2026-01-27 18:59:46.418042941 +0000 UTC m=+0.158914383 container exec_died 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal)
Jan 27 18:59:46 compute-0 systemd[1]: libpod-conmon-2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7.scope: Deactivated successfully.
Jan 27 18:59:46 compute-0 sudo[228721]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:47 compute-0 sudo[228903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecwkhxnlxfkmdggwvexiqqhcgtprxuup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540386.68199-832-240215850561474/AnsiballZ_file.py'
Jan 27 18:59:47 compute-0 sudo[228903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:47 compute-0 python3.9[228905]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:47 compute-0 sudo[228903]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:47 compute-0 sudo[229055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdjzmsozcekamrkyyvxdturjlqynpkpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540387.549255-841-68115346604392/AnsiballZ_podman_container_info.py'
Jan 27 18:59:47 compute-0 sudo[229055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:48 compute-0 python3.9[229057]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Jan 27 18:59:48 compute-0 sudo[229055]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:48 compute-0 sudo[229220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfiezdmsoevaihckkpvbhtzxfhtvzfhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540388.475131-849-130873159545869/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:48 compute-0 sudo[229220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:49 compute-0 python3.9[229222]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:49 compute-0 systemd[1]: Started libpod-conmon-a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d.scope.
Jan 27 18:59:49 compute-0 podman[229223]: 2026-01-27 18:59:49.217383514 +0000 UTC m=+0.098371191 container exec a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 18:59:49 compute-0 podman[229223]: 2026-01-27 18:59:49.249507099 +0000 UTC m=+0.130494746 container exec_died a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 27 18:59:49 compute-0 systemd[1]: libpod-conmon-a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d.scope: Deactivated successfully.
Jan 27 18:59:49 compute-0 sudo[229220]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:49 compute-0 podman[229238]: 2026-01-27 18:59:49.317607259 +0000 UTC m=+0.098450114 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 18:59:49 compute-0 podman[229403]: 2026-01-27 18:59:49.923139879 +0000 UTC m=+0.068319677 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 18:59:49 compute-0 sudo[229445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbcmyycojmnalwptattvbsxkaqhfnkib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540389.5285838-857-263575154947043/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:49 compute-0 sudo[229445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:49 compute-0 systemd[1]: a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d-2044240aed6a513c.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 18:59:49 compute-0 systemd[1]: a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d-2044240aed6a513c.service: Failed with result 'exit-code'.
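
Editor's note: the failing transient unit named after the container ID appears to be podman's systemd-driven healthcheck run for ceilometer_agent_ipmi; its exit status 1 is consistent with the health_status=starting, health_failing_streak=2 event logged just above. A minimal sketch for reading the health state podman records, assuming only that the podman CLI is available and the container name from the log; the field name varies between podman versions, so both spellings are tried:

    import json
    import subprocess

    # Inspect the container and pull out its recorded healthcheck state.
    data = json.loads(subprocess.run(
        ["podman", "inspect", "ceilometer_agent_ipmi"],
        check=True, capture_output=True, text=True,
    ).stdout)
    state = data[0]["State"]
    # Older podman releases expose this as "Healthcheck" rather than "Health".
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))
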
Jan 27 18:59:50 compute-0 python3.9[229449]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:50 compute-0 systemd[1]: Started libpod-conmon-a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d.scope.
Jan 27 18:59:50 compute-0 podman[229450]: 2026-01-27 18:59:50.282181402 +0000 UTC m=+0.100072382 container exec a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 18:59:50 compute-0 podman[229450]: 2026-01-27 18:59:50.316250133 +0000 UTC m=+0.134141073 container exec_died a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 27 18:59:50 compute-0 systemd[1]: libpod-conmon-a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d.scope: Deactivated successfully.
Jan 27 18:59:50 compute-0 sudo[229445]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:50 compute-0 sudo[229629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zijwdszhpddaqxzmdcrihhjmdatckfqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540390.6093857-865-207135766240991/AnsiballZ_file.py'
Jan 27 18:59:50 compute-0 sudo[229629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:51 compute-0 python3.9[229631]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:51 compute-0 sudo[229629]: pam_unix(sudo:session): session closed for user root
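
Editor's note: the preceding run is a repeated pattern in this log: podman_container_exec asks the container for its effective uid and gid (id -u, id -g), then ansible.builtin.file recursively chowns the bind-mounted healthcheck directory to that identity with mode 0700 (here 42405, the ceilometer user). A minimal sketch of the same sequence outside Ansible, assuming the container name and directory path shown in the log:

    import subprocess

    NAME = "ceilometer_agent_ipmi"
    HCDIR = "/var/lib/openstack/healthchecks/ceilometer_agent_ipmi"

    def exec_in_container(*cmd):
        # Same operation the podman_container_exec tasks above perform.
        return subprocess.run(
            ["podman", "exec", NAME, *cmd],
            check=True, capture_output=True, text=True,
        ).stdout.strip()

    uid = exec_in_container("id", "-u")
    gid = exec_in_container("id", "-g")

    # Mirror the ansible.builtin.file task: owner=<uid>, group=<gid>,
    # mode=0700, recurse=True on the healthcheck mount.
    subprocess.run(["chown", "-R", f"{uid}:{gid}", HCDIR], check=True)
    subprocess.run(["chmod", "-R", "0700", HCDIR], check=True)
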
Jan 27 18:59:51 compute-0 podman[229632]: 2026-01-27 18:59:51.311111996 +0000 UTC m=+0.085936985 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., config_id=kepler)
Jan 27 18:59:51 compute-0 sudo[229798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nddtobrlkuzrlzcumqxdjeiokyclwgyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540391.4654987-874-218507040008224/AnsiballZ_podman_container_info.py'
Jan 27 18:59:51 compute-0 sudo[229798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:52 compute-0 python3.9[229800]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Jan 27 18:59:52 compute-0 sudo[229798]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:52 compute-0 sudo[229962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwiylqqvnegczbvyswgpngjbxpcegtgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540392.398464-882-4165208859344/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:52 compute-0 sudo[229962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:53 compute-0 python3.9[229964]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:53 compute-0 systemd[1]: Started libpod-conmon-b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f.scope.
Jan 27 18:59:53 compute-0 podman[229965]: 2026-01-27 18:59:53.15652869 +0000 UTC m=+0.103376871 container exec b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, maintainer=Red Hat, Inc., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, distribution-scope=public)
Jan 27 18:59:53 compute-0 podman[229965]: 2026-01-27 18:59:53.19438087 +0000 UTC m=+0.141229041 container exec_died b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_id=kepler, container_name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc.)
Jan 27 18:59:53 compute-0 systemd[1]: libpod-conmon-b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f.scope: Deactivated successfully.
Jan 27 18:59:53 compute-0 sudo[229962]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:53 compute-0 sudo[230144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jryjqtubgsmkyugovsurngahvwhpnpxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540393.430808-890-217463230962498/AnsiballZ_podman_container_exec.py'
Jan 27 18:59:53 compute-0 sudo[230144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:54 compute-0 python3.9[230146]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 18:59:54 compute-0 systemd[1]: Started libpod-conmon-b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f.scope.
Jan 27 18:59:54 compute-0 podman[230147]: 2026-01-27 18:59:54.359573578 +0000 UTC m=+0.113189875 container exec b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, vcs-type=git, version=9.4, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.buildah.version=1.29.0, distribution-scope=public)
Jan 27 18:59:54 compute-0 podman[230147]: 2026-01-27 18:59:54.392956192 +0000 UTC m=+0.146572519 container exec_died b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, version=9.4, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 27 18:59:54 compute-0 systemd[1]: libpod-conmon-b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f.scope: Deactivated successfully.
Jan 27 18:59:54 compute-0 sudo[230144]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:55 compute-0 sudo[230326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vilhdhyfiaelwvzpjmcctdfliwufzltm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540394.651853-898-224784331154804/AnsiballZ_file.py'
Jan 27 18:59:55 compute-0 sudo[230326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:55 compute-0 python3.9[230328]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:55 compute-0 sudo[230326]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:55 compute-0 sudo[230491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pchopqlikghnvqyzqcaqgseqpzzxkaqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540395.5273216-907-274451830882595/AnsiballZ_file.py'
Jan 27 18:59:55 compute-0 sudo[230491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:55 compute-0 podman[230452]: 2026-01-27 18:59:55.934859904 +0000 UTC m=+0.089411169 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 27 18:59:56 compute-0 python3.9[230499]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:56 compute-0 sudo[230491]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:56 compute-0 sudo[230651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcabthvsfuqbbngabyqofkaucukydiuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540396.3822136-915-188959439695290/AnsiballZ_stat.py'
Jan 27 18:59:56 compute-0 sudo[230651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:57 compute-0 python3.9[230653]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:59:57 compute-0 sudo[230651]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:57 compute-0 sudo[230774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rstvwikvfwfjjsvkzeixedceavluxvdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540396.3822136-915-188959439695290/AnsiballZ_copy.py'
Jan 27 18:59:57 compute-0 sudo[230774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:57 compute-0 python3.9[230776]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769540396.3822136-915-188959439695290/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:57 compute-0 sudo[230774]: pam_unix(sudo:session): session closed for user root
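
Editor's note: the stat/copy pair above is Ansible's idempotent file transfer: the remote file's sha1 is compared against the source checksum (40b8960d...) and the destination is rewritten only when the digests differ. A sketch of that check under assumed, hypothetical paths; it is not the module's actual implementation:

    import hashlib
    import shutil

    def copy_if_changed(src, dest):
        """Copy src over dest only when their sha1 digests differ."""
        def sha1(path):
            h = hashlib.sha1()
            try:
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(65536), b""):
                        h.update(chunk)
            except FileNotFoundError:
                return None  # missing destination always triggers a copy
            return h.hexdigest()

        if sha1(src) != sha1(dest):
            shutil.copyfile(src, dest)
            return True   # changed
        return False      # already up to date
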
Jan 27 18:59:58 compute-0 sudo[230926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvngtgtwqbjtyktevclqoqwatuothbej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540398.1603699-931-192019905366903/AnsiballZ_file.py'
Jan 27 18:59:58 compute-0 sudo[230926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:58 compute-0 python3.9[230928]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 18:59:58 compute-0 sudo[230926]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:59 compute-0 sudo[231078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adozdrlqkoksdszxfcwgbekgjxsaobzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540399.031076-939-189956309042825/AnsiballZ_stat.py'
Jan 27 18:59:59 compute-0 sudo[231078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 18:59:59 compute-0 python3.9[231080]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 18:59:59 compute-0 podman[201378]: time="2026-01-27T18:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 18:59:59 compute-0 sudo[231078]: pam_unix(sudo:session): session closed for user root
Jan 27 18:59:59 compute-0 podman[201378]: @ - - [27/Jan/2026:18:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 18:59:59 compute-0 podman[201378]: @ - - [27/Jan/2026:18:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3846 "" "Go-http-client/1.1"
Jan 27 19:00:00 compute-0 sudo[231156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zixinwgrboyizyzuugctqhinmqeowyfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540399.031076-939-189956309042825/AnsiballZ_file.py'
Jan 27 19:00:00 compute-0 sudo[231156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:00 compute-0 python3.9[231158]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:00 compute-0 sudo[231156]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:00 compute-0 sudo[231308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fohzxrwrwlcslcncgisdkgrbfmgukfou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540400.566293-951-84370317914458/AnsiballZ_stat.py'
Jan 27 19:00:00 compute-0 sudo[231308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:01 compute-0 python3.9[231310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 19:00:01 compute-0 sudo[231308]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:01 compute-0 openstack_network_exporter[204477]: ERROR   19:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:00:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:00:01 compute-0 openstack_network_exporter[204477]: ERROR   19:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:00:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:00:01 compute-0 sudo[231386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kofiloiubennxgjpuzoofumhrneqbcwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540400.566293-951-84370317914458/AnsiballZ_file.py'
Jan 27 19:00:01 compute-0 sudo[231386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:01 compute-0 python3.9[231388]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.24m9cywj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:01 compute-0 sudo[231386]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:02 compute-0 sudo[231538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpmcsdyhytasxbgsxkcyxepumkniwyac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540402.0299218-963-181144706347936/AnsiballZ_stat.py'
Jan 27 19:00:02 compute-0 sudo[231538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:02 compute-0 python3.9[231540]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 19:00:02 compute-0 sudo[231538]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:03 compute-0 sudo[231636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lizibgllphrhlukeksywcdnnzklnndkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540402.0299218-963-181144706347936/AnsiballZ_file.py'
Jan 27 19:00:03 compute-0 sudo[231636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:03 compute-0 podman[231590]: 2026-01-27 19:00:03.019175795 +0000 UTC m=+0.118230942 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 19:00:03 compute-0 python3.9[231639]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:03 compute-0 sudo[231636]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:03 compute-0 sudo[231789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnjpzjntpmiocvxxujxsudxxtgmyctvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540403.5196574-976-99762202713149/AnsiballZ_command.py'
Jan 27 19:00:03 compute-0 sudo[231789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:04 compute-0 python3.9[231791]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:00:04 compute-0 sudo[231789]: pam_unix(sudo:session): session closed for user root
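
Editor's note: the command task above captures the current firewall state with nft -j list ruleset, which prints the whole ruleset as a single JSON document ({"nftables": [...]}) before the edpm_nftables tasks assemble rule files from /var/lib/edpm-config/firewall. A short sketch of reading that output; running it requires root, as in the log:

    import json
    import subprocess

    # Dump the live ruleset as JSON, the same call the log records.
    raw = subprocess.run(
        ["nft", "-j", "list", "ruleset"],
        check=True, capture_output=True, text=True,
    ).stdout
    ruleset = json.loads(raw)

    # Entries in the "nftables" array are metainfo, table, chain and rule
    # objects; list the chains as an example.
    for item in ruleset.get("nftables", []):
        if "chain" in item:
            chain = item["chain"]
            print(chain["family"], chain["table"], chain["name"])
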
Jan 27 19:00:05 compute-0 sudo[231942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evomawqneidlvfxnwevazmrnydxgzmin ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540404.4803023-984-184226932962047/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 19:00:05 compute-0 sudo[231942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:05 compute-0 python3[231944]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 19:00:05 compute-0 sudo[231942]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:06 compute-0 sudo[232094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chmvrrurplqhgnsojdwtfisuvfqbyfzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540405.6497996-992-114216729621642/AnsiballZ_stat.py'
Jan 27 19:00:06 compute-0 sudo[232094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:06 compute-0 python3.9[232096]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 19:00:06 compute-0 sudo[232094]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:06 compute-0 podman[232146]: 2026-01-27 19:00:06.72575151 +0000 UTC m=+0.087416983 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:00:06 compute-0 sudo[232220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kytuijchisfklzkgbtxikckjfkukwnxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540405.6497996-992-114216729621642/AnsiballZ_file.py'
Jan 27 19:00:06 compute-0 sudo[232220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:06 compute-0 podman[232148]: 2026-01-27 19:00:06.765514752 +0000 UTC m=+0.119288418 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 19:00:06 compute-0 podman[232147]: 2026-01-27 19:00:06.788085637 +0000 UTC m=+0.149757450 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Jan 27 19:00:06 compute-0 python3.9[232231]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:06 compute-0 sudo[232220]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:07 compute-0 sudo[232387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gapjzhequytjpyddjisoktavwpilnwng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540407.1979935-1004-135749404120745/AnsiballZ_stat.py'
Jan 27 19:00:07 compute-0 sudo[232387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:07 compute-0 python3.9[232389]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 19:00:07 compute-0 sudo[232387]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:08 compute-0 sudo[232465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqwolgzmqqurmagevxqnqrylkghlsglv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540407.1979935-1004-135749404120745/AnsiballZ_file.py'
Jan 27 19:00:08 compute-0 sudo[232465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:08 compute-0 python3.9[232467]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:08 compute-0 sudo[232465]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:09 compute-0 sudo[232617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmelxfzgeisgbpyotbvpljhmpxwchwlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540408.6465209-1016-256966670734463/AnsiballZ_stat.py'
Jan 27 19:00:09 compute-0 sudo[232617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:09 compute-0 python3.9[232619]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 19:00:09 compute-0 sudo[232617]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:09 compute-0 sudo[232695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtyvpykprfmybsoslmsgrgwhxsctufzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540408.6465209-1016-256966670734463/AnsiballZ_file.py'
Jan 27 19:00:09 compute-0 sudo[232695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:09 compute-0 python3.9[232697]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:09 compute-0 sudo[232695]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:10 compute-0 sudo[232847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwearlxauxwkoebxmepbyszxznmhvnfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540410.0953395-1028-31271314222731/AnsiballZ_stat.py'
Jan 27 19:00:10 compute-0 sudo[232847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:10 compute-0 python3.9[232849]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 19:00:10 compute-0 sudo[232847]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:11 compute-0 sudo[232925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsgtimabrwkyphmyjcepcftqxecwfeuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540410.0953395-1028-31271314222731/AnsiballZ_file.py'
Jan 27 19:00:11 compute-0 sudo[232925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:11 compute-0 python3.9[232927]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:11 compute-0 sudo[232925]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:12 compute-0 sudo[233077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqboenwmihfveaajeofbsfjklymgizqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540411.6351697-1040-7937704383831/AnsiballZ_stat.py'
Jan 27 19:00:12 compute-0 sudo[233077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:12 compute-0 python3.9[233079]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 19:00:12 compute-0 sudo[233077]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:12 compute-0 sudo[233202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iduemlhsvaglmskyurihoawxfcrxldbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540411.6351697-1040-7937704383831/AnsiballZ_copy.py'
Jan 27 19:00:12 compute-0 sudo[233202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:13 compute-0 python3.9[233204]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769540411.6351697-1040-7937704383831/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:13 compute-0 sudo[233202]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:13 compute-0 sudo[233354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzgpixyhsdkcbhicaudnqzgntgfdtalj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540413.2725885-1055-269728138506452/AnsiballZ_file.py'
Jan 27 19:00:13 compute-0 sudo[233354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:13 compute-0 python3.9[233356]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:13 compute-0 sudo[233354]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:14 compute-0 sudo[233506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvvzgxqwpdbahjkrftvxjjqsgbdjxzya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540414.1090233-1063-187090287859587/AnsiballZ_command.py'
Jan 27 19:00:14 compute-0 sudo[233506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:14 compute-0 python3.9[233508]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:00:14 compute-0 sudo[233506]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:15 compute-0 sudo[233661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvfubfcawdhpuoqbxujufywdxhgeiden ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540415.0610995-1071-21196999007754/AnsiballZ_blockinfile.py'
Jan 27 19:00:15 compute-0 sudo[233661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:15 compute-0 python3.9[233663]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:15 compute-0 sudo[233661]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:16 compute-0 sudo[233813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfuawqjgnbuqpfknlfoqrnavtuhhycny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540416.193678-1080-127545400860540/AnsiballZ_command.py'
Jan 27 19:00:16 compute-0 sudo[233813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:16 compute-0 python3.9[233815]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:00:16 compute-0 sudo[233813]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:17 compute-0 sudo[233966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyifxppsjdwnzrsqgyurxhmewjwdoivw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540417.155832-1088-15502871241506/AnsiballZ_stat.py'
Jan 27 19:00:17 compute-0 sudo[233966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:17 compute-0 python3.9[233968]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 19:00:17 compute-0 sudo[233966]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:18 compute-0 sudo[234120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzbystrfncrohqaltesgcwcmamqyxrwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540417.956166-1096-209763496038999/AnsiballZ_command.py'
Jan 27 19:00:18 compute-0 sudo[234120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:18 compute-0 python3.9[234122]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:00:18 compute-0 sudo[234120]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:19 compute-0 sudo[234275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmjxmtylbnabtqofoxyzlpwnyaydxpwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540418.8763433-1104-24359967729868/AnsiballZ_file.py'
Jan 27 19:00:19 compute-0 sudo[234275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:19 compute-0 python3.9[234277]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:19 compute-0 sudo[234275]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:19 compute-0 sshd-session[213305]: Connection closed by 192.168.122.31 port 53934
Jan 27 19:00:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:00:20.499 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:00:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:00:20.505 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:00:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:00:20.507 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:00:20 compute-0 sshd-session[213302]: pam_unix(sshd:session): session closed for user zuul
Jan 27 19:00:20 compute-0 systemd-logind[795]: Session 26 logged out. Waiting for processes to exit.
Jan 27 19:00:20 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Jan 27 19:00:20 compute-0 systemd[1]: session-26.scope: Consumed 1min 36.538s CPU time.
Jan 27 19:00:20 compute-0 systemd-logind[795]: Removed session 26.
Jan 27 19:00:20 compute-0 podman[234303]: 2026-01-27 19:00:20.648045964 +0000 UTC m=+0.085589769 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:00:20 compute-0 podman[234304]: 2026-01-27 19:00:20.65927918 +0000 UTC m=+0.092657096 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 19:00:22 compute-0 podman[234341]: 2026-01-27 19:00:22.351207844 +0000 UTC m=+0.109666681 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, container_name=kepler, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9)
Jan 27 19:00:25 compute-0 sshd-session[234362]: Accepted publickey for zuul from 192.168.122.31 port 35710 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 19:00:25 compute-0 systemd-logind[795]: New session 27 of user zuul.
Jan 27 19:00:25 compute-0 systemd[1]: Started Session 27 of User zuul.
Jan 27 19:00:25 compute-0 sshd-session[234362]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 19:00:26 compute-0 podman[234431]: 2026-01-27 19:00:26.334047593 +0000 UTC m=+0.104031966 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Jan 27 19:00:26 compute-0 python3.9[234535]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 19:00:28 compute-0 sudo[234689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dopkdcwicscthbdtbqvomsdjoswnqqrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540427.5513494-29-170645613101440/AnsiballZ_systemd.py'
Jan 27 19:00:28 compute-0 sudo[234689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:28 compute-0 python3.9[234691]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Jan 27 19:00:28 compute-0 sudo[234689]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:29 compute-0 sudo[234842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydmiwbyazdlzuoeahsvsuyiagavnvqcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540429.1289692-37-96484036329827/AnsiballZ_setup.py'
Jan 27 19:00:29 compute-0 sudo[234842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:29 compute-0 podman[201378]: time="2026-01-27T19:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:00:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:00:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3841 "" "Go-http-client/1.1"
Jan 27 19:00:29 compute-0 python3.9[234844]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 19:00:30 compute-0 sudo[234842]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:30 compute-0 nova_compute[185480]: 2026-01-27 19:00:30.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:00:30 compute-0 nova_compute[185480]: 2026-01-27 19:00:30.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 19:00:30 compute-0 nova_compute[185480]: 2026-01-27 19:00:30.567 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 19:00:30 compute-0 nova_compute[185480]: 2026-01-27 19:00:30.568 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:00:30 compute-0 nova_compute[185480]: 2026-01-27 19:00:30.569 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 19:00:30 compute-0 nova_compute[185480]: 2026-01-27 19:00:30.584 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:00:30 compute-0 sudo[234926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zynkpqrynvbsxqkbedyoaziyhaamvtbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540429.1289692-37-96484036329827/AnsiballZ_dnf.py'
Jan 27 19:00:30 compute-0 sudo[234926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:31 compute-0 python3.9[234928]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 19:00:31 compute-0 openstack_network_exporter[204477]: ERROR   19:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:00:31 compute-0 openstack_network_exporter[204477]: ERROR   19:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.092 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.093 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db475670>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.110 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.110 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.110 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:00:32.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:00:32 compute-0 nova_compute[185480]: 2026-01-27 19:00:32.595 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:00:32 compute-0 nova_compute[185480]: 2026-01-27 19:00:32.596 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:00:32 compute-0 nova_compute[185480]: 2026-01-27 19:00:32.596 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:00:32 compute-0 nova_compute[185480]: 2026-01-27 19:00:32.616 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 19:00:33 compute-0 podman[234934]: 2026-01-27 19:00:33.309673813 +0000 UTC m=+0.081147594 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 27 19:00:33 compute-0 sudo[234926]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.537 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.537 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.538 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.538 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:00:34 compute-0 sudo[235103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qblmuzrfshpumkmgnvqntefgwflbpteb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540434.2669425-49-168475845563711/AnsiballZ_stat.py'
Jan 27 19:00:34 compute-0 sudo[235103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.910 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.912 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5765MB free_disk=72.4786491394043GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.912 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:00:34 compute-0 nova_compute[185480]: 2026-01-27 19:00:34.913 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:00:35 compute-0 python3.9[235105]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 19:00:35 compute-0 sudo[235103]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:35 compute-0 nova_compute[185480]: 2026-01-27 19:00:35.062 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:00:35 compute-0 nova_compute[185480]: 2026-01-27 19:00:35.064 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:00:35 compute-0 nova_compute[185480]: 2026-01-27 19:00:35.150 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing inventories for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 19:00:35 compute-0 nova_compute[185480]: 2026-01-27 19:00:35.237 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating ProviderTree inventory for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 19:00:35 compute-0 nova_compute[185480]: 2026-01-27 19:00:35.237 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating inventory in ProviderTree for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 19:00:35 compute-0 nova_compute[185480]: 2026-01-27 19:00:35.257 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing aggregate associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 19:00:35 compute-0 nova_compute[185480]: 2026-01-27 19:00:35.284 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing trait associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, traits: HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 19:00:35 compute-0 nova_compute[185480]: 2026-01-27 19:00:35.315 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:00:35 compute-0 nova_compute[185480]: 2026-01-27 19:00:35.341 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:00:35 compute-0 nova_compute[185480]: 2026-01-27 19:00:35.342 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:00:35 compute-0 nova_compute[185480]: 2026-01-27 19:00:35.343 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:00:35 compute-0 sudo[235226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npisgkbuikfirjaaoszwndwgddjcxasg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540434.2669425-49-168475845563711/AnsiballZ_copy.py'
Jan 27 19:00:35 compute-0 sudo[235226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:35 compute-0 python3.9[235228]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769540434.2669425-49-168475845563711/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:35 compute-0 sudo[235226]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:36 compute-0 nova_compute[185480]: 2026-01-27 19:00:36.343 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:00:36 compute-0 nova_compute[185480]: 2026-01-27 19:00:36.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:00:36 compute-0 nova_compute[185480]: 2026-01-27 19:00:36.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:00:36 compute-0 sudo[235378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybnyyxbupttrxtfmuqsgmhgspxiqzktq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540436.1795425-64-8126928473394/AnsiballZ_file.py'
Jan 27 19:00:36 compute-0 sudo[235378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:36 compute-0 podman[235380]: 2026-01-27 19:00:36.981593526 +0000 UTC m=+0.108810700 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 19:00:36 compute-0 podman[235382]: 2026-01-27 19:00:36.983659295 +0000 UTC m=+0.107434846 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Jan 27 19:00:37 compute-0 podman[235381]: 2026-01-27 19:00:37.027269768 +0000 UTC m=+0.161943238 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 19:00:37 compute-0 python3.9[235383]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:37 compute-0 sudo[235378]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:37 compute-0 sudo[235596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhiybudjdwbwfuowlueezpwuwgizztet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540437.347259-72-27054417694292/AnsiballZ_stat.py'
Jan 27 19:00:37 compute-0 sudo[235596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:38 compute-0 python3.9[235598]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 19:00:38 compute-0 sudo[235596]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:38 compute-0 sudo[235719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqkqmfkhugcznzngabrvkqnigkyfacjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540437.347259-72-27054417694292/AnsiballZ_copy.py'
Jan 27 19:00:38 compute-0 sudo[235719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:38 compute-0 python3.9[235721]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769540437.347259-72-27054417694292/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 19:00:38 compute-0 sudo[235719]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:39 compute-0 sudo[235871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzrwejpfvwrxbremhdpnxmvqzznsqjfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769540439.1662354-87-259497695408378/AnsiballZ_systemd.py'
Jan 27 19:00:39 compute-0 sudo[235871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:00:39 compute-0 python3.9[235873]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 19:00:40 compute-0 systemd[1]: Stopping System Logging Service...
Jan 27 19:00:40 compute-0 rsyslogd[1006]: imjournal: 2595 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 27 19:00:40 compute-0 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] exiting on signal 15.
Jan 27 19:00:40 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Jan 27 19:00:40 compute-0 systemd[1]: Stopped System Logging Service.
Jan 27 19:00:40 compute-0 systemd[1]: rsyslog.service: Consumed 4.773s CPU time, 7.6M memory peak, read 0B from disk, written 5.7M to disk.
Jan 27 19:00:40 compute-0 systemd[1]: Starting System Logging Service...
Jan 27 19:00:40 compute-0 rsyslogd[235877]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="235877" x-info="https://www.rsyslog.com"] start
Jan 27 19:00:40 compute-0 systemd[1]: Started System Logging Service.
Jan 27 19:00:40 compute-0 rsyslogd[235877]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 19:00:40 compute-0 rsyslogd[235877]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Jan 27 19:00:40 compute-0 rsyslogd[235877]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Jan 27 19:00:40 compute-0 rsyslogd[235877]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Jan 27 19:00:40 compute-0 sudo[235871]: pam_unix(sudo:session): session closed for user root
Jan 27 19:00:40 compute-0 rsyslogd[235877]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Jan 27 19:00:41 compute-0 sshd-session[234365]: Connection closed by 192.168.122.31 port 35710
Jan 27 19:00:41 compute-0 sshd-session[234362]: pam_unix(sshd:session): session closed for user zuul
Jan 27 19:00:41 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Jan 27 19:00:41 compute-0 systemd[1]: session-27.scope: Consumed 11.829s CPU time.
Jan 27 19:00:41 compute-0 systemd-logind[795]: Session 27 logged out. Waiting for processes to exit.
Jan 27 19:00:41 compute-0 systemd-logind[795]: Removed session 27.
Jan 27 19:00:42 compute-0 sshd-session[235906]: Invalid user sol from 45.148.10.240 port 43188
Jan 27 19:00:43 compute-0 sshd-session[235906]: Connection closed by invalid user sol 45.148.10.240 port 43188 [preauth]
Jan 27 19:00:51 compute-0 podman[235909]: 2026-01-27 19:00:51.347550494 +0000 UTC m=+0.107131059 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:00:51 compute-0 podman[235910]: 2026-01-27 19:00:51.354756655 +0000 UTC m=+0.107980020 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 19:00:53 compute-0 podman[235950]: 2026-01-27 19:00:53.347243941 +0000 UTC m=+0.119167695 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, name=ubi9, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, build-date=2024-09-18T21:23:30, container_name=kepler, managed_by=edpm_ansible, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.expose-services=)
Jan 27 19:00:57 compute-0 podman[235969]: 2026-01-27 19:00:57.325148744 +0000 UTC m=+0.108860350 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 19:00:59 compute-0 podman[201378]: time="2026-01-27T19:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:00:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:00:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3849 "" "Go-http-client/1.1"
Jan 27 19:01:01 compute-0 CROND[235990]: (root) CMD (run-parts /etc/cron.hourly)
Jan 27 19:01:01 compute-0 run-parts[235993]: (/etc/cron.hourly) starting 0anacron
Jan 27 19:01:01 compute-0 anacron[236001]: Anacron started on 2026-01-27
Jan 27 19:01:01 compute-0 anacron[236001]: Will run job `cron.daily' in 37 min.
Jan 27 19:01:01 compute-0 anacron[236001]: Will run job `cron.weekly' in 57 min.
Jan 27 19:01:01 compute-0 anacron[236001]: Will run job `cron.monthly' in 77 min.
Jan 27 19:01:01 compute-0 anacron[236001]: Jobs will be executed sequentially
Jan 27 19:01:01 compute-0 run-parts[236003]: (/etc/cron.hourly) finished 0anacron
Jan 27 19:01:01 compute-0 CROND[235989]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 27 19:01:01 compute-0 openstack_network_exporter[204477]: ERROR   19:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:01:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:01:01 compute-0 openstack_network_exporter[204477]: ERROR   19:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:01:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:01:04 compute-0 podman[236004]: 2026-01-27 19:01:04.340083832 +0000 UTC m=+0.104013068 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 19:01:07 compute-0 podman[236022]: 2026-01-27 19:01:07.303830221 +0000 UTC m=+0.086688236 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 19:01:07 compute-0 podman[236024]: 2026-01-27 19:01:07.320391813 +0000 UTC m=+0.092592859 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 19:01:07 compute-0 podman[236023]: 2026-01-27 19:01:07.343856733 +0000 UTC m=+0.121478751 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:01:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:01:20.501 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:01:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:01:20.502 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:01:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:01:20.502 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:01:21 compute-0 sshd-session[236089]: Accepted publickey for zuul from 38.102.83.144 port 49046 ssh2: RSA SHA256:jhFpR9mdpMGvU2F0q/HJAkqGxozs6TWh9oCwMxPPlpE
Jan 27 19:01:21 compute-0 systemd-logind[795]: New session 28 of user zuul.
Jan 27 19:01:21 compute-0 systemd[1]: Started Session 28 of User zuul.
Jan 27 19:01:21 compute-0 sshd-session[236089]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 19:01:21 compute-0 podman[236092]: 2026-01-27 19:01:21.533096493 +0000 UTC m=+0.119824410 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:01:21 compute-0 podman[236093]: 2026-01-27 19:01:21.561648186 +0000 UTC m=+0.136231379 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi)
Jan 27 19:01:22 compute-0 python3[236308]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 19:01:24 compute-0 podman[236436]: 2026-01-27 19:01:24.351919783 +0000 UTC m=+0.115566257 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_id=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Jan 27 19:01:24 compute-0 sudo[236546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnwhpifxoyijkmmwolyyvgusdgiehuta ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540484.149809-36938-256022056607543/AnsiballZ_command.py'
Jan 27 19:01:24 compute-0 sudo[236546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:01:24 compute-0 python3[236548]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:01:25 compute-0 sudo[236546]: pam_unix(sudo:session): session closed for user root
Jan 27 19:01:25 compute-0 sudo[236699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vebxgwdtbvaoudeapztpcurgklhdbtda ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540485.3404496-36949-257634310294487/AnsiballZ_command.py'
Jan 27 19:01:25 compute-0 sudo[236699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:01:25 compute-0 python3[236701]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:01:27 compute-0 sudo[236699]: pam_unix(sudo:session): session closed for user root
Jan 27 19:01:28 compute-0 podman[236728]: 2026-01-27 19:01:28.312895919 +0000 UTC m=+0.096145695 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible)
Jan 27 19:01:29 compute-0 python3[236873]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 19:01:29 compute-0 podman[201378]: time="2026-01-27T19:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:01:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:01:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3852 "" "Go-http-client/1.1"
Jan 27 19:01:30 compute-0 sudo[237024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uukgpdjzoucuhgvvvxmgptxzcyjmqhae ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540489.5031462-36995-32727165375556/AnsiballZ_setup.py'
Jan 27 19:01:30 compute-0 sudo[237024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:01:30 compute-0 python3[237026]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 19:01:31 compute-0 openstack_network_exporter[204477]: ERROR   19:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:01:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:01:31 compute-0 openstack_network_exporter[204477]: ERROR   19:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:01:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:01:31 compute-0 sudo[237024]: pam_unix(sudo:session): session closed for user root
Jan 27 19:01:32 compute-0 sudo[237249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjlvqoiwpbiqrocdvwlbhvvdivgfvytv ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540492.0892274-37026-213638953285859/AnsiballZ_command.py'
Jan 27 19:01:32 compute-0 sudo[237249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:01:32 compute-0 python3[237251]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:01:32 compute-0 sudo[237249]: pam_unix(sudo:session): session closed for user root
Jan 27 19:01:33 compute-0 nova_compute[185480]: 2026-01-27 19:01:33.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:01:33 compute-0 nova_compute[185480]: 2026-01-27 19:01:33.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:01:33 compute-0 nova_compute[185480]: 2026-01-27 19:01:33.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:01:33 compute-0 nova_compute[185480]: 2026-01-27 19:01:33.563 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 19:01:33 compute-0 sudo[237414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drtfbukppcpzytcbqriadnnzjjphjctd ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769540493.1900086-37043-101978170593381/AnsiballZ_command.py'
Jan 27 19:01:33 compute-0 sudo[237414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:01:33 compute-0 python3[237416]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:01:33 compute-0 sudo[237414]: pam_unix(sudo:session): session closed for user root
Jan 27 19:01:34 compute-0 nova_compute[185480]: 2026-01-27 19:01:34.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:01:34 compute-0 nova_compute[185480]: 2026-01-27 19:01:34.537 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:01:34 compute-0 nova_compute[185480]: 2026-01-27 19:01:34.562 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:01:34 compute-0 nova_compute[185480]: 2026-01-27 19:01:34.562 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:01:34 compute-0 nova_compute[185480]: 2026-01-27 19:01:34.563 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:01:34 compute-0 nova_compute[185480]: 2026-01-27 19:01:34.563 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:01:35 compute-0 nova_compute[185480]: 2026-01-27 19:01:35.005 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:01:35 compute-0 nova_compute[185480]: 2026-01-27 19:01:35.006 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5674MB free_disk=72.47747421264648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:01:35 compute-0 nova_compute[185480]: 2026-01-27 19:01:35.007 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:01:35 compute-0 nova_compute[185480]: 2026-01-27 19:01:35.007 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:01:35 compute-0 nova_compute[185480]: 2026-01-27 19:01:35.263 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:01:35 compute-0 nova_compute[185480]: 2026-01-27 19:01:35.264 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:01:35 compute-0 nova_compute[185480]: 2026-01-27 19:01:35.292 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:01:35 compute-0 nova_compute[185480]: 2026-01-27 19:01:35.313 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:01:35 compute-0 nova_compute[185480]: 2026-01-27 19:01:35.316 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:01:35 compute-0 nova_compute[185480]: 2026-01-27 19:01:35.317 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:01:35 compute-0 podman[237456]: 2026-01-27 19:01:35.347941303 +0000 UTC m=+0.114708386 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260126, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:01:37 compute-0 nova_compute[185480]: 2026-01-27 19:01:37.295 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:01:37 compute-0 nova_compute[185480]: 2026-01-27 19:01:37.296 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:01:37 compute-0 nova_compute[185480]: 2026-01-27 19:01:37.296 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:01:37 compute-0 nova_compute[185480]: 2026-01-27 19:01:37.296 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:01:37 compute-0 nova_compute[185480]: 2026-01-27 19:01:37.297 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:01:37 compute-0 nova_compute[185480]: 2026-01-27 19:01:37.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:01:38 compute-0 podman[237473]: 2026-01-27 19:01:38.344785947 +0000 UTC m=+0.111447077 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 19:01:38 compute-0 podman[237475]: 2026-01-27 19:01:38.344376517 +0000 UTC m=+0.109984592 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 19:01:38 compute-0 podman[237474]: 2026-01-27 19:01:38.407262214 +0000 UTC m=+0.167736864 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 19:01:38 compute-0 nova_compute[185480]: 2026-01-27 19:01:38.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:01:38 compute-0 nova_compute[185480]: 2026-01-27 19:01:38.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:01:52 compute-0 podman[237538]: 2026-01-27 19:01:52.303128272 +0000 UTC m=+0.074995982 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:01:52 compute-0 podman[237539]: 2026-01-27 19:01:52.323175429 +0000 UTC m=+0.082342990 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Jan 27 19:01:55 compute-0 podman[237580]: 2026-01-27 19:01:55.315930154 +0000 UTC m=+0.089092284 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release=1214.1726694543, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-type=git)
Jan 27 19:01:56 compute-0 sshd-session[237601]: error: kex_exchange_identification: read: Connection reset by peer
Jan 27 19:01:56 compute-0 sshd-session[237601]: Connection reset by 176.120.22.52 port 59872
Jan 27 19:01:59 compute-0 podman[237602]: 2026-01-27 19:01:59.303391753 +0000 UTC m=+0.086868281 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, config_id=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 27 19:01:59 compute-0 podman[201378]: time="2026-01-27T19:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:01:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:01:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3849 "" "Go-http-client/1.1"
Jan 27 19:02:01 compute-0 openstack_network_exporter[204477]: ERROR   19:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:02:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:02:01 compute-0 openstack_network_exporter[204477]: ERROR   19:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:02:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:02:06 compute-0 podman[237624]: 2026-01-27 19:02:06.298576481 +0000 UTC m=+0.074644404 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:02:09 compute-0 podman[237644]: 2026-01-27 19:02:09.326438508 +0000 UTC m=+0.099119108 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:02:09 compute-0 podman[237646]: 2026-01-27 19:02:09.339550486 +0000 UTC m=+0.106175290 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 19:02:09 compute-0 podman[237645]: 2026-01-27 19:02:09.356928518 +0000 UTC m=+0.127170549 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 19:02:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:02:20.503 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:02:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:02:20.503 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:02:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:02:20.503 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:02:23 compute-0 podman[237710]: 2026-01-27 19:02:23.295652346 +0000 UTC m=+0.069929379 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:02:23 compute-0 podman[237711]: 2026-01-27 19:02:23.322473967 +0000 UTC m=+0.074485369 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 19:02:26 compute-0 podman[237752]: 2026-01-27 19:02:26.291395243 +0000 UTC m=+0.067084130 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, container_name=kepler, distribution-scope=public, release=1214.1726694543, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=kepler, architecture=x86_64, com.redhat.component=ubi9-container)
Jan 27 19:02:29 compute-0 podman[201378]: time="2026-01-27T19:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:02:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:02:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3853 "" "Go-http-client/1.1"
Jan 27 19:02:30 compute-0 podman[237769]: 2026-01-27 19:02:30.364722338 +0000 UTC m=+0.130131581 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Jan 27 19:02:31 compute-0 openstack_network_exporter[204477]: ERROR   19:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:02:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:02:31 compute-0 openstack_network_exporter[204477]: ERROR   19:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:02:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.093 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.093 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
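
These two lines describe ceilometer's polling manager handing every pollster in the [pollsters] source to a ThreadPoolExecutor with a single worker thread, so the tasks queue up and the cycle can run longer than the polling interval suggests. A minimal sketch of that dispatch pattern, with illustrative names only:

from concurrent.futures import ThreadPoolExecutor

def run_polling_cycle(pollsters, threads=1):
    # pollsters: mapping of pollster name -> zero-argument callable (illustrative).
    if len(pollsters) > threads:
        print("more pollsters than worker threads; expect the cycle to take longer")
    with ThreadPoolExecutor(max_workers=threads) as executor:
        futures = {executor.submit(func): name for name, func in pollsters.items()}
        for future, name in futures.items():
            future.result()  # propagate any pollster error
            print(f"Finished processing pollster [{name}].")

# Example: run_polling_cycle({"cpu": lambda: None, "memory.usage": lambda: None})
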
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c98290>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
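
Each "Skip pollster ..., no resources found this cycle" line above follows the same shape: the local_instances discovery (libvirt domains on this host) returns nothing because no guests are running, so the pollster is skipped for the cycle. A hedged sketch of that per-pollster flow; the real logic lives in ceilometer.polling.manager:

def poll_one(pollster_name, discover, get_samples):
    # discover: e.g. the local_instances discovery, returning [] on a host with no guests.
    resources = discover()
    if not resources:
        print(f"Skip pollster {pollster_name}, no resources found this cycle")
        return []
    return list(get_samples(resources))
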
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:02:32.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:02:33 compute-0 sshd-session[236105]: Received disconnect from 38.102.83.144 port 49046:11: disconnected by user
Jan 27 19:02:33 compute-0 sshd-session[236105]: Disconnected from user zuul 38.102.83.144 port 49046
Jan 27 19:02:33 compute-0 sshd-session[236089]: pam_unix(sshd:session): session closed for user zuul
Jan 27 19:02:33 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Jan 27 19:02:33 compute-0 systemd[1]: session-28.scope: Consumed 10.624s CPU time.
Jan 27 19:02:33 compute-0 systemd-logind[795]: Session 28 logged out. Waiting for processes to exit.
Jan 27 19:02:33 compute-0 systemd-logind[795]: Removed session 28.
Jan 27 19:02:34 compute-0 nova_compute[185480]: 2026-01-27 19:02:34.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:02:34 compute-0 nova_compute[185480]: 2026-01-27 19:02:34.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:02:34 compute-0 nova_compute[185480]: 2026-01-27 19:02:34.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:02:34 compute-0 nova_compute[185480]: 2026-01-27 19:02:34.564 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.561 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.562 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.562 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.563 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.900 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.901 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5705MB free_disk=72.47747421264648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.902 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.902 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.962 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.963 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:02:36 compute-0 nova_compute[185480]: 2026-01-27 19:02:36.984 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:02:37 compute-0 nova_compute[185480]: 2026-01-27 19:02:37.000 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:02:37 compute-0 nova_compute[185480]: 2026-01-27 19:02:37.001 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:02:37 compute-0 nova_compute[185480]: 2026-01-27 19:02:37.002 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:02:37 compute-0 podman[237790]: 2026-01-27 19:02:37.331302547 +0000 UTC m=+0.096834802 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 19:02:38 compute-0 nova_compute[185480]: 2026-01-27 19:02:38.003 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:02:38 compute-0 nova_compute[185480]: 2026-01-27 19:02:38.004 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:02:38 compute-0 nova_compute[185480]: 2026-01-27 19:02:38.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:02:38 compute-0 nova_compute[185480]: 2026-01-27 19:02:38.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:02:40 compute-0 podman[237812]: 2026-01-27 19:02:40.303584754 +0000 UTC m=+0.074938950 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 27 19:02:40 compute-0 podman[237810]: 2026-01-27 19:02:40.308316659 +0000 UTC m=+0.081789407 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:02:40 compute-0 podman[237811]: 2026-01-27 19:02:40.336913154 +0000 UTC m=+0.109551211 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 19:02:40 compute-0 nova_compute[185480]: 2026-01-27 19:02:40.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:02:40 compute-0 nova_compute[185480]: 2026-01-27 19:02:40.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:02:54 compute-0 podman[237877]: 2026-01-27 19:02:54.275252173 +0000 UTC m=+0.053308776 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:02:54 compute-0 podman[237878]: 2026-01-27 19:02:54.314415874 +0000 UTC m=+0.074424518 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 19:02:57 compute-0 podman[237920]: 2026-01-27 19:02:57.300893556 +0000 UTC m=+0.079625315 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, version=9.4, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_id=kepler, release-0.7.12=)
Jan 27 19:02:59 compute-0 podman[201378]: time="2026-01-27T19:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:02:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:02:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3857 "" "Go-http-client/1.1"
Jan 27 19:03:00 compute-0 sshd-session[237938]: Invalid user funded from 45.148.10.240 port 57468
Jan 27 19:03:00 compute-0 sshd-session[237938]: Connection closed by invalid user funded 45.148.10.240 port 57468 [preauth]
Jan 27 19:03:01 compute-0 podman[237940]: 2026-01-27 19:03:01.311647091 +0000 UTC m=+0.084761939 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, container_name=openstack_network_exporter)
Jan 27 19:03:01 compute-0 openstack_network_exporter[204477]: ERROR   19:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:03:01 compute-0 openstack_network_exporter[204477]: ERROR   19:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:03:08 compute-0 podman[237960]: 2026-01-27 19:03:08.299226685 +0000 UTC m=+0.077192525 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Jan 27 19:03:11 compute-0 podman[237982]: 2026-01-27 19:03:11.31219879 +0000 UTC m=+0.071112909 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 27 19:03:11 compute-0 podman[237980]: 2026-01-27 19:03:11.322141132 +0000 UTC m=+0.088880030 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 19:03:11 compute-0 podman[237981]: 2026-01-27 19:03:11.336026589 +0000 UTC m=+0.110179517 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller)
Jan 27 19:03:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:03:20.505 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:03:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:03:20.505 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:03:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:03:20.506 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:03:25 compute-0 podman[238045]: 2026-01-27 19:03:25.346811022 +0000 UTC m=+0.121950277 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 19:03:25 compute-0 podman[238046]: 2026-01-27 19:03:25.353488194 +0000 UTC m=+0.122739524 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 19:03:28 compute-0 podman[238084]: 2026-01-27 19:03:28.351559319 +0000 UTC m=+0.126791435 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, version=9.4, container_name=kepler, release-0.7.12=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 19:03:29 compute-0 podman[201378]: time="2026-01-27T19:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:03:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:03:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3854 "" "Go-http-client/1.1"
Jan 27 19:03:31 compute-0 openstack_network_exporter[204477]: ERROR   19:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:03:31 compute-0 openstack_network_exporter[204477]: ERROR   19:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:03:32 compute-0 podman[238104]: 2026-01-27 19:03:32.301248755 +0000 UTC m=+0.077652992 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 19:03:36 compute-0 nova_compute[185480]: 2026-01-27 19:03:36.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:03:36 compute-0 nova_compute[185480]: 2026-01-27 19:03:36.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:03:36 compute-0 nova_compute[185480]: 2026-01-27 19:03:36.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:03:36 compute-0 nova_compute[185480]: 2026-01-27 19:03:36.533 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 19:03:36 compute-0 nova_compute[185480]: 2026-01-27 19:03:36.534 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:03:36 compute-0 nova_compute[185480]: 2026-01-27 19:03:36.534 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:03:37 compute-0 nova_compute[185480]: 2026-01-27 19:03:37.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:03:37 compute-0 nova_compute[185480]: 2026-01-27 19:03:37.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:03:37 compute-0 nova_compute[185480]: 2026-01-27 19:03:37.553 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:03:37 compute-0 nova_compute[185480]: 2026-01-27 19:03:37.554 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:03:37 compute-0 nova_compute[185480]: 2026-01-27 19:03:37.554 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:03:37 compute-0 nova_compute[185480]: 2026-01-27 19:03:37.555 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:03:37 compute-0 nova_compute[185480]: 2026-01-27 19:03:37.936 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:03:37 compute-0 nova_compute[185480]: 2026-01-27 19:03:37.937 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5707MB free_disk=72.47747039794922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:03:37 compute-0 nova_compute[185480]: 2026-01-27 19:03:37.937 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:03:37 compute-0 nova_compute[185480]: 2026-01-27 19:03:37.938 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:03:38 compute-0 nova_compute[185480]: 2026-01-27 19:03:38.009 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:03:38 compute-0 nova_compute[185480]: 2026-01-27 19:03:38.010 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:03:38 compute-0 nova_compute[185480]: 2026-01-27 19:03:38.038 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:03:38 compute-0 nova_compute[185480]: 2026-01-27 19:03:38.057 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:03:38 compute-0 nova_compute[185480]: 2026-01-27 19:03:38.059 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:03:38 compute-0 nova_compute[185480]: 2026-01-27 19:03:38.059 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:03:39 compute-0 nova_compute[185480]: 2026-01-27 19:03:39.054 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:03:39 compute-0 nova_compute[185480]: 2026-01-27 19:03:39.090 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:03:39 compute-0 podman[238124]: 2026-01-27 19:03:39.31745659 +0000 UTC m=+0.099719862 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true)
Jan 27 19:03:39 compute-0 nova_compute[185480]: 2026-01-27 19:03:39.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:03:39 compute-0 nova_compute[185480]: 2026-01-27 19:03:39.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:03:41 compute-0 nova_compute[185480]: 2026-01-27 19:03:41.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:03:42 compute-0 podman[238146]: 2026-01-27 19:03:42.311960396 +0000 UTC m=+0.079725973 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 27 19:03:42 compute-0 podman[238144]: 2026-01-27 19:03:42.329235089 +0000 UTC m=+0.100752147 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:03:42 compute-0 podman[238145]: 2026-01-27 19:03:42.352049037 +0000 UTC m=+0.120225553 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 19:03:42 compute-0 nova_compute[185480]: 2026-01-27 19:03:42.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:03:48 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:03:48.771 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:03:48 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:03:48.773 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:03:48 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:03:48.774 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:03:56 compute-0 podman[238209]: 2026-01-27 19:03:56.313989953 +0000 UTC m=+0.077147989 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:03:56 compute-0 podman[238210]: 2026-01-27 19:03:56.341244099 +0000 UTC m=+0.092221798 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:03:59 compute-0 podman[238251]: 2026-01-27 19:03:59.298082834 +0000 UTC m=+0.078580594 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, version=9.4, io.buildah.version=1.29.0, vcs-type=git, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 19:03:59 compute-0 podman[201378]: time="2026-01-27T19:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:03:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:03:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3851 "" "Go-http-client/1.1"
Jan 27 19:04:01 compute-0 openstack_network_exporter[204477]: ERROR   19:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:04:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:04:01 compute-0 openstack_network_exporter[204477]: ERROR   19:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:04:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:04:03 compute-0 podman[238271]: 2026-01-27 19:04:03.296821921 +0000 UTC m=+0.076874532 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Jan 27 19:04:10 compute-0 podman[238292]: 2026-01-27 19:04:10.310120115 +0000 UTC m=+0.087391709 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Jan 27 19:04:13 compute-0 podman[238312]: 2026-01-27 19:04:13.293817437 +0000 UTC m=+0.066804226 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 19:04:13 compute-0 podman[238310]: 2026-01-27 19:04:13.316147434 +0000 UTC m=+0.097006765 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 19:04:13 compute-0 podman[238311]: 2026-01-27 19:04:13.341828742 +0000 UTC m=+0.118865480 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 19:04:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:04:20.506 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:04:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:04:20.507 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:04:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:04:20.507 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:04:27 compute-0 podman[238375]: 2026-01-27 19:04:27.295950375 +0000 UTC m=+0.069870911 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:04:27 compute-0 podman[238376]: 2026-01-27 19:04:27.301254574 +0000 UTC m=+0.073631902 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:04:29 compute-0 podman[201378]: time="2026-01-27T19:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:04:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:04:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3866 "" "Go-http-client/1.1"
Jan 27 19:04:29 compute-0 podman[238417]: 2026-01-27 19:04:29.890622935 +0000 UTC m=+0.101009493 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, release=1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 19:04:31 compute-0 openstack_network_exporter[204477]: ERROR   19:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:04:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:04:31 compute-0 openstack_network_exporter[204477]: ERROR   19:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:04:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.094 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.094 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d825cc50>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:04:32.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:04:34 compute-0 podman[238438]: 2026-01-27 19:04:34.315104333 +0000 UTC m=+0.094354851 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.)
Jan 27 19:04:37 compute-0 nova_compute[185480]: 2026-01-27 19:04:37.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:04:37 compute-0 nova_compute[185480]: 2026-01-27 19:04:37.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:04:37 compute-0 nova_compute[185480]: 2026-01-27 19:04:37.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:04:37 compute-0 nova_compute[185480]: 2026-01-27 19:04:37.573 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 19:04:38 compute-0 nova_compute[185480]: 2026-01-27 19:04:38.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:04:38 compute-0 nova_compute[185480]: 2026-01-27 19:04:38.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:04:39 compute-0 nova_compute[185480]: 2026-01-27 19:04:39.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:04:39 compute-0 nova_compute[185480]: 2026-01-27 19:04:39.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:04:39 compute-0 nova_compute[185480]: 2026-01-27 19:04:39.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:04:39 compute-0 nova_compute[185480]: 2026-01-27 19:04:39.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:04:39 compute-0 nova_compute[185480]: 2026-01-27 19:04:39.591 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:04:39 compute-0 nova_compute[185480]: 2026-01-27 19:04:39.591 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:04:39 compute-0 nova_compute[185480]: 2026-01-27 19:04:39.592 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:04:39 compute-0 nova_compute[185480]: 2026-01-27 19:04:39.593 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:04:39 compute-0 nova_compute[185480]: 2026-01-27 19:04:39.997 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:04:39 compute-0 nova_compute[185480]: 2026-01-27 19:04:39.998 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5692MB free_disk=72.47808837890625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:04:39 compute-0 nova_compute[185480]: 2026-01-27 19:04:39.998 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:04:39 compute-0 nova_compute[185480]: 2026-01-27 19:04:39.998 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:04:40 compute-0 nova_compute[185480]: 2026-01-27 19:04:40.319 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:04:40 compute-0 nova_compute[185480]: 2026-01-27 19:04:40.320 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:04:40 compute-0 nova_compute[185480]: 2026-01-27 19:04:40.352 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:04:40 compute-0 nova_compute[185480]: 2026-01-27 19:04:40.432 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:04:40 compute-0 nova_compute[185480]: 2026-01-27 19:04:40.434 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:04:40 compute-0 nova_compute[185480]: 2026-01-27 19:04:40.434 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:04:41 compute-0 podman[238459]: 2026-01-27 19:04:41.345524808 +0000 UTC m=+0.108395775 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:04:41 compute-0 nova_compute[185480]: 2026-01-27 19:04:41.435 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:04:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:04:42.130 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:04:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:04:42.131 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:04:42 compute-0 nova_compute[185480]: 2026-01-27 19:04:42.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:04:42 compute-0 nova_compute[185480]: 2026-01-27 19:04:42.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:04:43 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:04:43.133 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:04:44 compute-0 podman[238480]: 2026-01-27 19:04:44.30716876 +0000 UTC m=+0.081465145 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 19:04:44 compute-0 podman[238478]: 2026-01-27 19:04:44.330262565 +0000 UTC m=+0.112682529 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:04:44 compute-0 podman[238479]: 2026-01-27 19:04:44.368064881 +0000 UTC m=+0.145608785 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 19:04:58 compute-0 podman[238543]: 2026-01-27 19:04:58.304234156 +0000 UTC m=+0.080582104 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 19:04:58 compute-0 podman[238544]: 2026-01-27 19:04:58.349172396 +0000 UTC m=+0.119024184 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 27 19:04:59 compute-0 podman[201378]: time="2026-01-27T19:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:04:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:04:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3861 "" "Go-http-client/1.1"
Jan 27 19:05:00 compute-0 podman[238585]: 2026-01-27 19:05:00.301419411 +0000 UTC m=+0.080090581 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, config_id=kepler, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler)
Jan 27 19:05:01 compute-0 openstack_network_exporter[204477]: ERROR   19:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:05:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:05:01 compute-0 openstack_network_exporter[204477]: ERROR   19:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:05:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.002 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "b6b280bb-d859-43f3-836a-f93d00510948" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.003 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
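Annotation: the paired "Acquiring lock" / "acquired ... waited" DEBUG lines (and the later '"released" ... held' line) are the standard oslo.concurrency lock trace; the "inner" function named in the lockutils.py references is the decorator's wrapper. A minimal sketch of the pattern, assuming oslo.concurrency is installed and using a stand-in function body:

from oslo_concurrency import lockutils

# The synchronized decorator's wrapper logs the acquire/wait/hold DEBUG
# messages seen in the journal around the decorated call.
@lockutils.synchronized('b6b280bb-d859-43f3-836a-f93d00510948')
def _locked_do_build_and_run_instance():
    # instance build work would run here while the named lock is held
    return 'built'

print(_locked_do_build_and_run_instance())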
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.032 185484 DEBUG nova.compute.manager [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.183 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.184 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.196 185484 DEBUG nova.virt.hardware [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.197 185484 INFO nova.compute.claims [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Claim successful on node compute-0.ctlplane.example.com
Jan 27 19:05:05 compute-0 podman[238605]: 2026-01-27 19:05:05.303567828 +0000 UTC m=+0.083914395 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.361 185484 DEBUG nova.compute.provider_tree [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.375 185484 DEBUG nova.scheduler.client.report [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
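Annotation: the schedulable capacity implied by that inventory follows the usual Placement rule (total - reserved) * allocation_ratio, stated here as an assumption about how the ratios are applied; a small arithmetic sketch with the values copied from the log entry above:

# Worked example of Placement-style capacity from the logged inventory.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: schedulable capacity ~= {capacity:g}")
    # VCPU ~= 32, MEMORY_MB ~= 7167, DISK_GB ~= 71.1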
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.431 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.432 185484 DEBUG nova.compute.manager [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.471 185484 DEBUG nova.compute.manager [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.472 185484 DEBUG nova.network.neutron [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.493 185484 INFO nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.526 185484 DEBUG nova.compute.manager [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.629 185484 DEBUG nova.compute.manager [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.631 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.632 185484 INFO nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Creating image(s)
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.632 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "/var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.633 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.634 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.634 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:05 compute-0 nova_compute[185480]: 2026-01-27 19:05:05.635 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:06 compute-0 nova_compute[185480]: 2026-01-27 19:05:06.768 185484 WARNING oslo_policy.policy [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 27 19:05:06 compute-0 nova_compute[185480]: 2026-01-27 19:05:06.769 185484 WARNING oslo_policy.policy [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 27 19:05:07 compute-0 nova_compute[185480]: 2026-01-27 19:05:07.220 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:07 compute-0 nova_compute[185480]: 2026-01-27 19:05:07.293 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97.part --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
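Annotation: the "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- ..." prefix on these qemu-img info calls is oslo.concurrency capping the child's address space and CPU time. A minimal sketch of issuing the same kind of call through processutils, assuming the ProcessLimits keywords shown below and a placeholder image path (not quoted from Nova):

from oslo_concurrency import processutils

# 1 GiB address-space cap and 30 s CPU-time cap; processutils builds the
# "python3 -m oslo_concurrency.prlimit --as=... --cpu=... --" wrapper
# that appears in the log lines above.
limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
out, err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C',
    'qemu-img', 'info', '/tmp/example.qcow2',
    '--force-share', '--output=json',
    prlimit=limits)
print(out)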
Jan 27 19:05:07 compute-0 nova_compute[185480]: 2026-01-27 19:05:07.296 185484 DEBUG nova.virt.images [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] 525193b7-cb5a-4d63-9747-3b917622bbe3 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 27 19:05:07 compute-0 nova_compute[185480]: 2026-01-27 19:05:07.300 185484 DEBUG nova.privsep.utils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 19:05:07 compute-0 nova_compute[185480]: 2026-01-27 19:05:07.302 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97.part /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:07 compute-0 nova_compute[185480]: 2026-01-27 19:05:07.526 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97.part /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97.converted" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:07 compute-0 nova_compute[185480]: 2026-01-27 19:05:07.533 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:07 compute-0 nova_compute[185480]: 2026-01-27 19:05:07.594 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97.converted --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:07 compute-0 nova_compute[185480]: 2026-01-27 19:05:07.596 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
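Annotation: between acquiring and releasing the image-cache lock above, the driver inspected the downloaded Glance image, found it was qcow2, and flattened it to raw in the _base cache. A standalone sketch of the same two CLI steps, with placeholder paths and qemu-img assumed on PATH (the logged command sequence, not Nova's implementation):

import json
import subprocess

part = '/tmp/_base/example.part'            # downloaded image (placeholder path)
converted = '/tmp/_base/example.converted'  # raw output (placeholder path)

# Inspect the download; if it is qcow2, convert it to raw with no host caching (-t none).
info = json.loads(subprocess.check_output(
    ['qemu-img', 'info', part, '--force-share', '--output=json']))
if info['format'] == 'qcow2':
    subprocess.check_call(
        ['qemu-img', 'convert', '-t', 'none', '-O', 'raw', '-f', 'qcow2', part, converted])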
Jan 27 19:05:07 compute-0 nova_compute[185480]: 2026-01-27 19:05:07.624 185484 INFO oslo.privsep.daemon [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpfj9_wdnb/privsep.sock']
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.387 185484 INFO oslo.privsep.daemon [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Spawned new privsep daemon via rootwrap
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.235 238642 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.240 238642 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.242 238642 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.242 238642 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238642
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.471 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.531 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.532 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.533 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.544 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.599 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.600 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97,backing_fmt=raw /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.654 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97,backing_fmt=raw /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk 1073741824" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.655 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
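Annotation: the instance's root disk is then created as a copy-on-write qcow2 overlay whose backing file is the cached raw base image, sized to the flavor's 1 GiB root disk. A small sketch of the same qemu-img call with placeholder paths:

import subprocess

base = '/tmp/_base/example-base'                 # cached raw base image (placeholder)
disk = '/tmp/instances/example-uuid/disk'        # instance root disk (placeholder)

# qcow2 overlay on top of the raw base; 1073741824 bytes = 1 GiB virtual size,
# matching the size argument in the logged command.
subprocess.check_call([
    'qemu-img', 'create', '-f', 'qcow2',
    '-o', f'backing_file={base},backing_fmt=raw',
    disk, '1073741824'])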
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.656 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.740 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.741 185484 DEBUG nova.virt.disk.api [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Checking if we can resize image /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.742 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.806 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.807 185484 DEBUG nova.virt.disk.api [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Cannot resize image /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.807 185484 DEBUG nova.objects.instance [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'migration_context' on Instance uuid b6b280bb-d859-43f3-836a-f93d00510948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.824 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "/var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.825 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.826 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.827 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.827 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.828 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.850 185484 DEBUG nova.network.neutron [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Successfully created port: b7e20f48-5e15-4381-8111-2bbf9ae03610 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.867 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.868 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.912 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.914 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
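Annotation: the ephemeral-disk base is built the same way: a 1 GiB raw file formatted as VFAT with the label ephemeral0, which later backs the instance's disk.eph0 overlay. A minimal sketch of those two commands, with a placeholder path and mkfs.vfat assumed to be installed:

import subprocess

eph_base = '/tmp/_base/ephemeral_1_example'   # placeholder path

# Allocate the 1 GiB raw backing file, then format it VFAT with the label
# the guest will see as "ephemeral0".
subprocess.check_call(['qemu-img', 'create', '-f', 'raw', eph_base, '1G'])
subprocess.check_call(['mkfs', '-t', 'vfat', '-n', 'ephemeral0', eph_base])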
Jan 27 19:05:08 compute-0 nova_compute[185480]: 2026-01-27 19:05:08.942 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.014 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.016 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.017 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.032 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.119 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.120 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.159 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.160 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.161 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.227 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.229 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.229 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Ensure instance console log exists: /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.230 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.230 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:09 compute-0 nova_compute[185480]: 2026-01-27 19:05:09.231 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:11 compute-0 nova_compute[185480]: 2026-01-27 19:05:11.121 185484 DEBUG nova.network.neutron [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Successfully updated port: b7e20f48-5e15-4381-8111-2bbf9ae03610 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 19:05:11 compute-0 nova_compute[185480]: 2026-01-27 19:05:11.171 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:05:11 compute-0 nova_compute[185480]: 2026-01-27 19:05:11.172 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquired lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:05:11 compute-0 nova_compute[185480]: 2026-01-27 19:05:11.172 185484 DEBUG nova.network.neutron [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:05:11 compute-0 nova_compute[185480]: 2026-01-27 19:05:11.643 185484 DEBUG nova.compute.manager [req-cb977b16-e2fa-4002-b202-4bcf8bbe857b req-d46f22d4-af64-42ad-bc4f-e2ab420900aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Received event network-changed-b7e20f48-5e15-4381-8111-2bbf9ae03610 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:05:11 compute-0 nova_compute[185480]: 2026-01-27 19:05:11.644 185484 DEBUG nova.compute.manager [req-cb977b16-e2fa-4002-b202-4bcf8bbe857b req-d46f22d4-af64-42ad-bc4f-e2ab420900aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Refreshing instance network info cache due to event network-changed-b7e20f48-5e15-4381-8111-2bbf9ae03610. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:05:11 compute-0 nova_compute[185480]: 2026-01-27 19:05:11.645 185484 DEBUG oslo_concurrency.lockutils [req-cb977b16-e2fa-4002-b202-4bcf8bbe857b req-d46f22d4-af64-42ad-bc4f-e2ab420900aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:05:11 compute-0 nova_compute[185480]: 2026-01-27 19:05:11.766 185484 DEBUG nova.network.neutron [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:05:12 compute-0 podman[238675]: 2026-01-27 19:05:12.309780868 +0000 UTC m=+0.083557306 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.770 185484 DEBUG nova.network.neutron [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updating instance_info_cache with network_info: [{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.795 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Releasing lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.796 185484 DEBUG nova.compute.manager [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Instance network_info: |[{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
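Annotation: the network_info blob recorded above is a list of VIF dictionaries. A short sketch of pulling the device name, MAC address, and fixed IPs back out of that structure, with the values copied from this log entry (abridged to the fields used):

# Abridged copy of the logged network_info entry.
network_info = [{
    "id": "b7e20f48-5e15-4381-8111-2bbf9ae03610",
    "address": "fa:16:3e:74:d9:f9",
    "devname": "tapb7e20f48-5e",
    "type": "ovs",
    "network": {
        "label": "private",
        "subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.162", "type": "fixed"}],
        }],
    },
}]

for vif in network_info:
    ips = [ip["address"]
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"]]
    # -> tapb7e20f48-5e fa:16:3e:74:d9:f9 ['192.168.0.162']
    print(vif["devname"], vif["address"], ips)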
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.798 185484 DEBUG oslo_concurrency.lockutils [req-cb977b16-e2fa-4002-b202-4bcf8bbe857b req-d46f22d4-af64-42ad-bc4f-e2ab420900aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.799 185484 DEBUG nova.network.neutron [req-cb977b16-e2fa-4002-b202-4bcf8bbe857b req-d46f22d4-af64-42ad-bc4f-e2ab420900aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Refreshing network info cache for port b7e20f48-5e15-4381-8111-2bbf9ae03610 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.806 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Start _get_guest_xml network_info=[{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T19:03:35Z,direct_url=<?>,disk_format='qcow2',id=525193b7-cb5a-4d63-9747-3b917622bbe3,min_disk=0,min_ram=0,name='cirros',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T19:03:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.823 185484 WARNING nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.831 185484 DEBUG nova.virt.libvirt.host [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.831 185484 DEBUG nova.virt.libvirt.host [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.837 185484 DEBUG nova.virt.libvirt.host [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.837 185484 DEBUG nova.virt.libvirt.host [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.838 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.839 185484 DEBUG nova.virt.hardware [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T19:03:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bc7c8c58-0a2b-4396-9f89-7ff8e35afa36',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T19:03:35Z,direct_url=<?>,disk_format='qcow2',id=525193b7-cb5a-4d63-9747-3b917622bbe3,min_disk=0,min_ram=0,name='cirros',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T19:03:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.839 185484 DEBUG nova.virt.hardware [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.840 185484 DEBUG nova.virt.hardware [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.840 185484 DEBUG nova.virt.hardware [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.841 185484 DEBUG nova.virt.hardware [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.841 185484 DEBUG nova.virt.hardware [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.841 185484 DEBUG nova.virt.hardware [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.842 185484 DEBUG nova.virt.hardware [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.842 185484 DEBUG nova.virt.hardware [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.843 185484 DEBUG nova.virt.hardware [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.843 185484 DEBUG nova.virt.hardware [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.848 185484 DEBUG nova.privsep.utils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.850 185484 DEBUG nova.virt.libvirt.vif [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f04ec1493db14ca1adbb4b6abd1667b1',ramdisk_id='',reservation_id='r-itl5lg4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:05:05Z,user_data=None,user_id='6d30d46dc88a4403b3a241949384d8f7',uuid=b6b280bb-d859-43f3-836a-f93d00510948,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.851 185484 DEBUG nova.network.os_vif_util [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converting VIF {"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.852 185484 DEBUG nova.network.os_vif_util [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:d9:f9,bridge_name='br-int',has_traffic_filtering=True,id=b7e20f48-5e15-4381-8111-2bbf9ae03610,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e20f48-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.854 185484 DEBUG nova.objects.instance [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6b280bb-d859-43f3-836a-f93d00510948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.872 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] End _get_guest_xml xml=<domain type="kvm">
Jan 27 19:05:12 compute-0 nova_compute[185480]:   <uuid>b6b280bb-d859-43f3-836a-f93d00510948</uuid>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   <name>instance-00000001</name>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   <memory>524288</memory>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   <vcpu>1</vcpu>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   <metadata>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <nova:name>test_0</nova:name>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <nova:creationTime>2026-01-27 19:05:12</nova:creationTime>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <nova:flavor name="m1.small">
Jan 27 19:05:12 compute-0 nova_compute[185480]:         <nova:memory>512</nova:memory>
Jan 27 19:05:12 compute-0 nova_compute[185480]:         <nova:disk>1</nova:disk>
Jan 27 19:05:12 compute-0 nova_compute[185480]:         <nova:swap>0</nova:swap>
Jan 27 19:05:12 compute-0 nova_compute[185480]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 19:05:12 compute-0 nova_compute[185480]:         <nova:vcpus>1</nova:vcpus>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       </nova:flavor>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <nova:owner>
Jan 27 19:05:12 compute-0 nova_compute[185480]:         <nova:user uuid="6d30d46dc88a4403b3a241949384d8f7">admin</nova:user>
Jan 27 19:05:12 compute-0 nova_compute[185480]:         <nova:project uuid="f04ec1493db14ca1adbb4b6abd1667b1">admin</nova:project>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       </nova:owner>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <nova:root type="image" uuid="525193b7-cb5a-4d63-9747-3b917622bbe3"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <nova:ports>
Jan 27 19:05:12 compute-0 nova_compute[185480]:         <nova:port uuid="b7e20f48-5e15-4381-8111-2bbf9ae03610">
Jan 27 19:05:12 compute-0 nova_compute[185480]:           <nova:ip type="fixed" address="192.168.0.162" ipVersion="4"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:         </nova:port>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       </nova:ports>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     </nova:instance>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   </metadata>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   <sysinfo type="smbios">
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <system>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <entry name="manufacturer">RDO</entry>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <entry name="product">OpenStack Compute</entry>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <entry name="serial">b6b280bb-d859-43f3-836a-f93d00510948</entry>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <entry name="uuid">b6b280bb-d859-43f3-836a-f93d00510948</entry>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <entry name="family">Virtual Machine</entry>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     </system>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   </sysinfo>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   <os>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <boot dev="hd"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <smbios mode="sysinfo"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   </os>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   <features>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <acpi/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <apic/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <vmcoreinfo/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   </features>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   <clock offset="utc">
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <timer name="hpet" present="no"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   </clock>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   <cpu mode="host-model" match="exact">
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   </cpu>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   <devices>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <target dev="vda" bus="virtio"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <target dev="vdb" bus="virtio"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <disk type="file" device="cdrom">
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.config"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <target dev="sda" bus="sata"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <interface type="ethernet">
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <mac address="fa:16:3e:74:d9:f9"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <mtu size="1442"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <target dev="tapb7e20f48-5e"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     </interface>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <serial type="pty">
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <log file="/var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/console.log" append="off"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     </serial>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <video>
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     </video>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <input type="tablet" bus="usb"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <rng model="virtio">
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <backend model="random">/dev/urandom</backend>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     </rng>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <controller type="usb" index="0"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     <memballoon model="virtio">
Jan 27 19:05:12 compute-0 nova_compute[185480]:       <stats period="10"/>
Jan 27 19:05:12 compute-0 nova_compute[185480]:     </memballoon>
Jan 27 19:05:12 compute-0 nova_compute[185480]:   </devices>
Jan 27 19:05:12 compute-0 nova_compute[185480]: </domain>
Jan 27 19:05:12 compute-0 nova_compute[185480]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.874 185484 DEBUG nova.compute.manager [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Preparing to wait for external event network-vif-plugged-b7e20f48-5e15-4381-8111-2bbf9ae03610 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.874 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "b6b280bb-d859-43f3-836a-f93d00510948-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.874 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.875 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.875 185484 DEBUG nova.virt.libvirt.vif [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f04ec1493db14ca1adbb4b6abd1667b1',ramdisk_id='',reservation_id='r-itl5lg4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:05:05Z,user_data=None,user_id='6d30d46dc88a4403b3a241949384d8f7',uuid=b6b280bb-d859-43f3-836a-f93d00510948,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.876 185484 DEBUG nova.network.os_vif_util [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converting VIF {"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.876 185484 DEBUG nova.network.os_vif_util [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:d9:f9,bridge_name='br-int',has_traffic_filtering=True,id=b7e20f48-5e15-4381-8111-2bbf9ae03610,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e20f48-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.876 185484 DEBUG os_vif [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:d9:f9,bridge_name='br-int',has_traffic_filtering=True,id=b7e20f48-5e15-4381-8111-2bbf9ae03610,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e20f48-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.909 185484 DEBUG ovsdbapp.backend.ovs_idl [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.909 185484 DEBUG ovsdbapp.backend.ovs_idl [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.909 185484 DEBUG ovsdbapp.backend.ovs_idl [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.910 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.911 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.912 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.913 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.915 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.918 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.926 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.926 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.926 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:05:12 compute-0 nova_compute[185480]: 2026-01-27 19:05:12.927 185484 INFO oslo.privsep.daemon [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpuzz1sa38/privsep.sock']
Jan 27 19:05:13 compute-0 nova_compute[185480]: 2026-01-27 19:05:13.737 185484 INFO oslo.privsep.daemon [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Spawned new privsep daemon via rootwrap
Jan 27 19:05:13 compute-0 nova_compute[185480]: 2026-01-27 19:05:13.553 238697 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 19:05:13 compute-0 nova_compute[185480]: 2026-01-27 19:05:13.561 238697 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 19:05:13 compute-0 nova_compute[185480]: 2026-01-27 19:05:13.564 238697 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 27 19:05:13 compute-0 nova_compute[185480]: 2026-01-27 19:05:13.565 238697 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238697
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.045 185484 DEBUG nova.network.neutron [req-cb977b16-e2fa-4002-b202-4bcf8bbe857b req-d46f22d4-af64-42ad-bc4f-e2ab420900aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updated VIF entry in instance network info cache for port b7e20f48-5e15-4381-8111-2bbf9ae03610. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.046 185484 DEBUG nova.network.neutron [req-cb977b16-e2fa-4002-b202-4bcf8bbe857b req-d46f22d4-af64-42ad-bc4f-e2ab420900aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updating instance_info_cache with network_info: [{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.067 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.068 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7e20f48-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.070 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7e20f48-5e, col_values=(('external_ids', {'iface-id': 'b7e20f48-5e15-4381-8111-2bbf9ae03610', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:d9:f9', 'vm-uuid': 'b6b280bb-d859-43f3-836a-f93d00510948'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.074 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:14 compute-0 NetworkManager[56191]: <info>  [1769540714.0762] manager: (tapb7e20f48-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.077 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.088 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.091 185484 INFO os_vif [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:d9:f9,bridge_name='br-int',has_traffic_filtering=True,id=b7e20f48-5e15-4381-8111-2bbf9ae03610,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e20f48-5e')
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.119 185484 DEBUG oslo_concurrency.lockutils [req-cb977b16-e2fa-4002-b202-4bcf8bbe857b req-d46f22d4-af64-42ad-bc4f-e2ab420900aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.224 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.226 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.227 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.227 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No VIF found with MAC fa:16:3e:74:d9:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.229 185484 INFO nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Using config drive
Jan 27 19:05:14 compute-0 podman[238703]: 2026-01-27 19:05:14.773179555 +0000 UTC m=+0.083014142 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 19:05:14 compute-0 podman[238705]: 2026-01-27 19:05:14.794367395 +0000 UTC m=+0.100853531 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 27 19:05:14 compute-0 podman[238704]: 2026-01-27 19:05:14.821446877 +0000 UTC m=+0.121706800 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.909 185484 INFO nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Creating config drive at /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.config
Jan 27 19:05:14 compute-0 nova_compute[185480]: 2026-01-27 19:05:14.920 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4l3j9qzl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.057 185484 DEBUG oslo_concurrency.processutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4l3j9qzl" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:15 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 27 19:05:15 compute-0 kernel: tapb7e20f48-5e: entered promiscuous mode
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.158 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:15 compute-0 ovn_controller[97647]: 2026-01-27T19:05:15Z|00027|binding|INFO|Claiming lport b7e20f48-5e15-4381-8111-2bbf9ae03610 for this chassis.
Jan 27 19:05:15 compute-0 ovn_controller[97647]: 2026-01-27T19:05:15Z|00028|binding|INFO|b7e20f48-5e15-4381-8111-2bbf9ae03610: Claiming fa:16:3e:74:d9:f9 192.168.0.162
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.163 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:15 compute-0 NetworkManager[56191]: <info>  [1769540715.1656] manager: (tapb7e20f48-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Jan 27 19:05:15 compute-0 systemd-udevd[238790]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:05:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:15.194 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:d9:f9 192.168.0.162'], port_security=['fa:16:3e:74:d9:f9 192.168.0.162'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.162/24', 'neutron:device_id': 'b6b280bb-d859-43f3-836a-f93d00510948', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a99badb-bb64-4e2f-95a8-78f317eb6676', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33ebbec8-56f4-45ac-84a6-f80dd4a7c167, chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=b7e20f48-5e15-4381-8111-2bbf9ae03610) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:05:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:15.195 106898 INFO neutron.agent.ovn.metadata.agent [-] Port b7e20f48-5e15-4381-8111-2bbf9ae03610 in datapath 4f32262d-dee8-406b-8a5a-09e95f48c8d5 bound to our chassis
Jan 27 19:05:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:15.197 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f32262d-dee8-406b-8a5a-09e95f48c8d5
Jan 27 19:05:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:15.198 106898 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpvgdzo1u0/privsep.sock']
Jan 27 19:05:15 compute-0 NetworkManager[56191]: <info>  [1769540715.2204] device (tapb7e20f48-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 19:05:15 compute-0 NetworkManager[56191]: <info>  [1769540715.2212] device (tapb7e20f48-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.246 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:15 compute-0 ovn_controller[97647]: 2026-01-27T19:05:15Z|00029|binding|INFO|Setting lport b7e20f48-5e15-4381-8111-2bbf9ae03610 ovn-installed in OVS
Jan 27 19:05:15 compute-0 ovn_controller[97647]: 2026-01-27T19:05:15Z|00030|binding|INFO|Setting lport b7e20f48-5e15-4381-8111-2bbf9ae03610 up in Southbound
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.254 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:15 compute-0 systemd-machined[156762]: New machine qemu-1-instance-00000001.
Jan 27 19:05:15 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.701 185484 DEBUG nova.compute.manager [req-c102edc1-ab39-4b5f-9fea-725e4dbe4895 req-57c3df18-e1f7-4d0c-8b50-38e7d3c13f9d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Received event network-vif-plugged-b7e20f48-5e15-4381-8111-2bbf9ae03610 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.701 185484 DEBUG oslo_concurrency.lockutils [req-c102edc1-ab39-4b5f-9fea-725e4dbe4895 req-57c3df18-e1f7-4d0c-8b50-38e7d3c13f9d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "b6b280bb-d859-43f3-836a-f93d00510948-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.702 185484 DEBUG oslo_concurrency.lockutils [req-c102edc1-ab39-4b5f-9fea-725e4dbe4895 req-57c3df18-e1f7-4d0c-8b50-38e7d3c13f9d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.702 185484 DEBUG oslo_concurrency.lockutils [req-c102edc1-ab39-4b5f-9fea-725e4dbe4895 req-57c3df18-e1f7-4d0c-8b50-38e7d3c13f9d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.702 185484 DEBUG nova.compute.manager [req-c102edc1-ab39-4b5f-9fea-725e4dbe4895 req-57c3df18-e1f7-4d0c-8b50-38e7d3c13f9d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Processing event network-vif-plugged-b7e20f48-5e15-4381-8111-2bbf9ae03610 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
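Note: the Acquiring/acquired/"released" triple around pop_instance_event above is the usual serialize-then-pop pattern for the per-instance event list. A rough stand-alone illustration with a plain threading.Lock follows; the names are invented and the real code uses oslo_concurrency.lockutils.

    import threading

    # Hypothetical stand-in for the per-instance event bookkeeping seen above.
    _events_lock = threading.Lock()          # the "<uuid>-events" lock in the log
    _pending_events = {"network-vif-plugged-b7e20f48": "event-object"}

    def pop_instance_event(tag):
        # Acquire, pop, release: mirrors the Acquiring/acquired/"released" lines.
        with _events_lock:
            return _pending_events.pop(tag, None)

    event = pop_instance_event("network-vif-plugged-b7e20f48")
    print("dispatching" if event else "no waiting events found")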
Jan 27 19:05:15 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 19:05:15 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.830 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769540715.8292055, b6b280bb-d859-43f3-836a-f93d00510948 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.831 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] VM Started (Lifecycle Event)
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.835 185484 DEBUG nova.compute.manager [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.858 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.864 185484 INFO nova.virt.libvirt.driver [-] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Instance spawned successfully.
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.865 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.923 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:05:15 compute-0 nova_compute[185480]: 2026-01-27 19:05:15.928 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
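Note: power_state is logged as a bare integer above (DB power_state 0, VM power_state 1). As a reading aid, this is the conventional nova.compute.power_state mapping; verify against the deployed Nova tree if the exact values matter.

    # Conventional Nova power_state values (nova.compute.power_state), shown
    # only to decode "DB power_state: 0, VM power_state: 1" in the line above.
    POWER_STATE = {
        0: "NOSTATE",    # DB value before the guest has ever been seen running
        1: "RUNNING",    # what libvirt reports once the domain is started
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATE[0], "->", POWER_STATE[1])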
Jan 27 19:05:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:15.932 106898 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 19:05:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:15.933 106898 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpvgdzo1u0/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 19:05:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:15.778 238834 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 19:05:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:15.781 238834 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 19:05:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:15.784 238834 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 27 19:05:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:15.784 238834 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238834
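Note: the capability line above packs the effective, permitted and inheritable sets into one eff/prm/inh string. A small parser for exactly that format, pure string handling with no external libraries:

    def parse_privsep_caps(triple):
        # Parses the "eff/prm/inh" capability string exactly as logged above.
        def to_set(part):
            return set() if part == "none" else set(part.split("|"))
        eff, prm, inh = triple.split("/")
        return to_set(eff), to_set(prm), to_set(inh)

    eff, prm, inh = parse_privsep_caps(
        "CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/"
        "CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none")
    assert "CAP_NET_ADMIN" in eff and inh == set()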
Jan 27 19:05:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:15.937 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[9c59d792-a1c1-47ec-ba11-af5b71b9a7fe]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.000 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.002 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769540715.829362, b6b280bb-d859-43f3-836a-f93d00510948 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.003 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] VM Paused (Lifecycle Event)
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.027 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.028 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.030 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.031 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.032 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.034 185484 DEBUG nova.virt.libvirt.driver [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.067 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.077 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769540715.8419385, b6b280bb-d859-43f3-836a-f93d00510948 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.079 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] VM Resumed (Lifecycle Event)
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.142 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.150 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.256 185484 INFO nova.compute.manager [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Took 10.63 seconds to spawn the instance on the hypervisor.
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.258 185484 DEBUG nova.compute.manager [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.317 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:05:16 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:16.447 238834 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:16 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:16.447 238834 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:16 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:16.448 238834 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.491 185484 INFO nova.compute.manager [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Took 11.35 seconds to build instance.
Jan 27 19:05:16 compute-0 nova_compute[185480]: 2026-01-27 19:05:16.536 185484 DEBUG oslo_concurrency.lockutils [None req-05521792-3005-4ff8-9711-fdf40612bde6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
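Note: the figures above (10.63 s hypervisor spawn, 11.35 s total build, 11.533 s lock hold) can be pulled out of a saved journal extract with a short regex; the file path below is hypothetical.

    import re

    # Pulls "Took N seconds to spawn/build ..." figures per instance out of a
    # saved journal extract; /tmp/compute-0.log is a made-up path.
    pattern = re.compile(r"\[instance: ([0-9a-f-]+)\] Took ([\d.]+) seconds to (spawn|build)")
    with open("/tmp/compute-0.log") as fh:
        for line in fh:
            m = pattern.search(line)
            if m:
                uuid, seconds, phase = m.groups()
                print(f"{uuid}: {phase} took {seconds}s")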
Jan 27 19:05:16 compute-0 sshd-session[238840]: Invalid user sol from 45.148.10.240 port 40880
Jan 27 19:05:16 compute-0 sshd-session[238840]: Connection closed by invalid user sol 45.148.10.240 port 40880 [preauth]
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.008 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[48c12a8e-0b55-4152-802f-7b81ac34899b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.011 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4f32262d-d1 in ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.013 238834 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4f32262d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.013 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[6d94a00a-6a6f-47d9-b612-48992cd36b85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.017 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[859ffd40-ca12-44ef-9581-926b112bd51f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.046 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6df72a-0688-4c0f-91c9-59b05ff01cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.075 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[e6868281-396d-4c84-a906-7e25ce9afd5a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
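Note: the "Creating VETH tap4f32262d-d1 in ovnmeta-..." step above is the agent wiring a veth pair between the root namespace and the per-datapath metadata namespace (the agent itself does this through pyroute2 behind privsep). A rough manual equivalent in plain iproute2 commands, driven from Python; running it by hand needs root and would conflict with the agent on a live node, so treat it purely as an illustration.

    import subprocess

    ns = "ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5"   # namespace named in the log
    outer, inner = "tap4f32262d-d0", "tap4f32262d-d1"      # veth pair named in the log

    # Rough iproute2 equivalent of the agent's datapath provisioning veth setup.
    for cmd in (
        ["ip", "netns", "add", ns],
        ["ip", "link", "add", outer, "type", "veth", "peer", "name", inner],
        ["ip", "link", "set", inner, "netns", ns],
        ["ip", "link", "set", outer, "up"],
        ["ip", "netns", "exec", ns, "ip", "link", "set", inner, "up"],
    ):
        subprocess.run(cmd, check=True)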
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.078 106898 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp62opvnzg/privsep.sock']
Jan 27 19:05:17 compute-0 nova_compute[185480]: 2026-01-27 19:05:17.434 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.776 106898 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.777 106898 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp62opvnzg/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.655 238851 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.663 238851 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.667 238851 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.667 238851 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238851
Jan 27 19:05:17 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:17.780 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5465f7-7827-4eea-8ca9-56ce7fbfb324]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:17 compute-0 nova_compute[185480]: 2026-01-27 19:05:17.849 185484 DEBUG nova.compute.manager [req-97f9fb78-7288-4097-9cea-4f179d165a4b req-22192532-3aae-48cf-a5e8-0ff017627f65 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Received event network-vif-plugged-b7e20f48-5e15-4381-8111-2bbf9ae03610 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:05:17 compute-0 nova_compute[185480]: 2026-01-27 19:05:17.850 185484 DEBUG oslo_concurrency.lockutils [req-97f9fb78-7288-4097-9cea-4f179d165a4b req-22192532-3aae-48cf-a5e8-0ff017627f65 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "b6b280bb-d859-43f3-836a-f93d00510948-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:17 compute-0 nova_compute[185480]: 2026-01-27 19:05:17.852 185484 DEBUG oslo_concurrency.lockutils [req-97f9fb78-7288-4097-9cea-4f179d165a4b req-22192532-3aae-48cf-a5e8-0ff017627f65 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:17 compute-0 nova_compute[185480]: 2026-01-27 19:05:17.853 185484 DEBUG oslo_concurrency.lockutils [req-97f9fb78-7288-4097-9cea-4f179d165a4b req-22192532-3aae-48cf-a5e8-0ff017627f65 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:17 compute-0 nova_compute[185480]: 2026-01-27 19:05:17.854 185484 DEBUG nova.compute.manager [req-97f9fb78-7288-4097-9cea-4f179d165a4b req-22192532-3aae-48cf-a5e8-0ff017627f65 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] No waiting events found dispatching network-vif-plugged-b7e20f48-5e15-4381-8111-2bbf9ae03610 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:05:17 compute-0 nova_compute[185480]: 2026-01-27 19:05:17.855 185484 WARNING nova.compute.manager [req-97f9fb78-7288-4097-9cea-4f179d165a4b req-22192532-3aae-48cf-a5e8-0ff017627f65 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Received unexpected event network-vif-plugged-b7e20f48-5e15-4381-8111-2bbf9ae03610 for instance with vm_state active and task_state None.
Jan 27 19:05:18 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:18.283 238851 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:18 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:18.283 238851 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:18 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:18.284 238851 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:18 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:18.852 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[ce18f668-c63e-4dd1-a3ff-42e0c087e95a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:18 compute-0 NetworkManager[56191]: <info>  [1769540718.8828] manager: (tap4f32262d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Jan 27 19:05:18 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:18.880 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[a70ffa51-7878-488c-8fe4-1750d648912b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:18 compute-0 systemd-udevd[238863]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:05:18 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:18.918 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[fe41a033-9978-488a-b472-fba83d30559f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:18 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:18.922 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[42076099-64f6-4c74-babc-bd9583a3e077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:18 compute-0 NetworkManager[56191]: <info>  [1769540718.9518] device (tap4f32262d-d0): carrier: link connected
Jan 27 19:05:18 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:18.958 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[288c929a-56ab-4c85-9ab3-90fe853b5e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:18 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:18.977 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[5994cf8a-3015-45d3-8829-c7bb858d4ec0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f32262d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:bd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383551, 'reachable_time': 20481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238881, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:18 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:18.993 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c55f5b-81df-4080-842e-d65efefdc30e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:bd84'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383551, 'tstamp': 383551}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238882, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:19.005 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec79386-68b9-4f8c-a563-8c39b12fe508]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f32262d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:bd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383551, 'reachable_time': 20481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238883, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
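Note: the two large privsep replies above are pyroute2-style RTM_NEWLINK messages rendered as nested lists, with the useful fields buried in the attrs list. A small helper that works only on that literal structure (no netlink calls are made):

    def iface_summary(msg):
        # msg is one RTM_NEWLINK dict as logged above; attrs is a list of
        # [NAME, value] pairs, so turn it into a lookup table first.
        attrs = dict(pair for pair in msg["attrs"] if len(pair) == 2)
        return {
            "ifname": attrs.get("IFLA_IFNAME"),
            "mac": attrs.get("IFLA_ADDRESS"),
            "mtu": attrs.get("IFLA_MTU"),
            "state": msg.get("state"),
            "namespace": msg["header"].get("target"),
        }

    sample = {"attrs": [["IFLA_IFNAME", "tap4f32262d-d1"],
                        ["IFLA_ADDRESS", "fa:16:3e:05:bd:84"],
                        ["IFLA_MTU", 1500]],
              "state": "up",
              "header": {"target": "ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5"}}
    print(iface_summary(sample))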
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:19.030 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[d792c2d0-cae3-4e6e-a80b-e86a568be333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:19 compute-0 nova_compute[185480]: 2026-01-27 19:05:19.076 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:19.108 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[82ca5746-5df4-4673-8f9e-0c2aeb7d4bfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:19.110 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f32262d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:19.111 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:19.112 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f32262d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:05:19 compute-0 NetworkManager[56191]: <info>  [1769540719.1162] manager: (tap4f32262d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 27 19:05:19 compute-0 kernel: tap4f32262d-d0: entered promiscuous mode
Jan 27 19:05:19 compute-0 nova_compute[185480]: 2026-01-27 19:05:19.115 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:19.124 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f32262d-d0, col_values=(('external_ids', {'iface-id': '5950ebf0-6d13-4405-b07d-fec152665bda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:05:19 compute-0 nova_compute[185480]: 2026-01-27 19:05:19.123 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:19 compute-0 ovn_controller[97647]: 2026-01-27T19:05:19Z|00031|binding|INFO|Releasing lport 5950ebf0-6d13-4405-b07d-fec152665bda from this chassis (sb_readonly=0)
Jan 27 19:05:19 compute-0 nova_compute[185480]: 2026-01-27 19:05:19.126 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:19 compute-0 nova_compute[185480]: 2026-01-27 19:05:19.156 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:19.156 106898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4f32262d-dee8-406b-8a5a-09e95f48c8d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4f32262d-dee8-406b-8a5a-09e95f48c8d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:19.158 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[964d4bad-dc53-496f-af50-cbde1b38bbad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:05:19 compute-0 nova_compute[185480]: 2026-01-27 19:05:19.158 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:19.160 106898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: global
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     log         /dev/log local0 debug
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     log-tag     haproxy-metadata-proxy-4f32262d-dee8-406b-8a5a-09e95f48c8d5
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     user        root
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     group       root
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     maxconn     1024
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     pidfile     /var/lib/neutron/external/pids/4f32262d-dee8-406b-8a5a-09e95f48c8d5.pid.haproxy
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     daemon
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: defaults
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     log global
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     mode http
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     option httplog
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     option dontlognull
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     option http-server-close
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     option forwardfor
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     retries                 3
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     timeout http-request    30s
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     timeout connect         30s
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     timeout client          32s
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     timeout server          32s
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     timeout http-keep-alive 30s
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: listen listener
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     bind 169.254.169.254:80
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:     http-request add-header X-OVN-Network-ID 4f32262d-dee8-406b-8a5a-09e95f48c8d5
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 19:05:19 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:19.162 106898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'env', 'PROCESS_TAG=haproxy-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4f32262d-dee8-406b-8a5a-09e95f48c8d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
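Note: the haproxy_cfg dump above is written to /var/lib/neutron/ovn-metadata-proxy/<network>.conf and then launched inside the namespace by the rootwrap command on the last line. A hedged sketch of rendering a similar file from a network UUID and asking haproxy to validate it; the template only mirrors the dump above (it is not the agent's actual template) and /tmp is a scratch path.

    import subprocess

    TEMPLATE = """global
        log /dev/log local0 debug
        log-tag haproxy-metadata-proxy-{net}
        user root
        group root
        maxconn 1024
        pidfile /var/lib/neutron/external/pids/{net}.pid.haproxy
        daemon

    defaults
        log global
        mode http
        timeout connect 30s
        timeout client 32s
        timeout server 32s

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID {net}
    """

    net = "4f32262d-dee8-406b-8a5a-09e95f48c8d5"
    path = f"/tmp/{net}.conf"                      # scratch path, not the agent's
    with open(path, "w") as fh:
        fh.write(TEMPLATE.format(net=net))
    # "haproxy -c -f FILE" only checks the configuration and exits.
    subprocess.run(["haproxy", "-c", "-f", path], check=True)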
Jan 27 19:05:19 compute-0 podman[238916]: 2026-01-27 19:05:19.71751873 +0000 UTC m=+0.087346509 container create 94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 19:05:19 compute-0 systemd[1]: Started libpod-conmon-94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd.scope.
Jan 27 19:05:19 compute-0 podman[238916]: 2026-01-27 19:05:19.681655772 +0000 UTC m=+0.051483601 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 19:05:19 compute-0 systemd[1]: Started libcrun container.
Jan 27 19:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a23d5e1cd4ce04d9e99883c89e44a7aeeb78c92d34b779e9856e86c91a9c6e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 19:05:19 compute-0 podman[238916]: 2026-01-27 19:05:19.820591903 +0000 UTC m=+0.190419722 container init 94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 27 19:05:19 compute-0 podman[238916]: 2026-01-27 19:05:19.827080422 +0000 UTC m=+0.196908221 container start 94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 19:05:19 compute-0 neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5[238931]: [NOTICE]   (238935) : New worker (238937) forked
Jan 27 19:05:19 compute-0 neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5[238931]: [NOTICE]   (238935) : Loading success.
Jan 27 19:05:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:20.508 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:20.509 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:05:20.510 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:22 compute-0 nova_compute[185480]: 2026-01-27 19:05:22.437 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:24 compute-0 nova_compute[185480]: 2026-01-27 19:05:24.079 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:27 compute-0 nova_compute[185480]: 2026-01-27 19:05:27.440 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:28 compute-0 ovn_controller[97647]: 2026-01-27T19:05:28Z|00032|binding|INFO|Releasing lport 5950ebf0-6d13-4405-b07d-fec152665bda from this chassis (sb_readonly=0)
Jan 27 19:05:28 compute-0 nova_compute[185480]: 2026-01-27 19:05:28.961 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:28 compute-0 NetworkManager[56191]: <info>  [1769540728.9664] manager: (patch-provnet-a840d17c-fdee-47a6-ad2f-5737dc1a5af4-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Jan 27 19:05:28 compute-0 NetworkManager[56191]: <info>  [1769540728.9705] device (patch-provnet-a840d17c-fdee-47a6-ad2f-5737dc1a5af4-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 19:05:28 compute-0 NetworkManager[56191]: <warn>  [1769540728.9710] device (patch-provnet-a840d17c-fdee-47a6-ad2f-5737dc1a5af4-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 19:05:28 compute-0 NetworkManager[56191]: <info>  [1769540728.9842] manager: (patch-br-int-to-provnet-a840d17c-fdee-47a6-ad2f-5737dc1a5af4): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Jan 27 19:05:28 compute-0 NetworkManager[56191]: <info>  [1769540728.9878] device (patch-br-int-to-provnet-a840d17c-fdee-47a6-ad2f-5737dc1a5af4)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 19:05:28 compute-0 NetworkManager[56191]: <warn>  [1769540728.9879] device (patch-br-int-to-provnet-a840d17c-fdee-47a6-ad2f-5737dc1a5af4)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 19:05:28 compute-0 NetworkManager[56191]: <info>  [1769540728.9917] manager: (patch-provnet-a840d17c-fdee-47a6-ad2f-5737dc1a5af4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Jan 27 19:05:28 compute-0 NetworkManager[56191]: <info>  [1769540728.9922] manager: (patch-br-int-to-provnet-a840d17c-fdee-47a6-ad2f-5737dc1a5af4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 27 19:05:28 compute-0 NetworkManager[56191]: <info>  [1769540728.9928] device (patch-provnet-a840d17c-fdee-47a6-ad2f-5737dc1a5af4-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 27 19:05:28 compute-0 NetworkManager[56191]: <info>  [1769540728.9931] device (patch-br-int-to-provnet-a840d17c-fdee-47a6-ad2f-5737dc1a5af4)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
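Note: the two "error setting IPv4 forwarding" warnings above are NetworkManager writing a per-interface sysctl for OVS patch interfaces; patch ports exist only inside Open vSwitch and have no kernel netdev, so the /proc/sys path is missing and the warning is expected noise rather than a failure. A quick way to check that path for any interface name:

    import os

    def has_ipv4_forwarding_knob(ifname):
        # OVS patch ports live only inside Open vSwitch, so this path is
        # missing for them, which is what NetworkManager trips over above.
        return os.path.exists(f"/proc/sys/net/ipv4/conf/{ifname}/forwarding")

    for ifname in ("eth0",
                   "patch-br-int-to-provnet-a840d17c-fdee-47a6-ad2f-5737dc1a5af4"):
        print(ifname, has_ipv4_forwarding_knob(ifname))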
Jan 27 19:05:28 compute-0 ovn_controller[97647]: 2026-01-27T19:05:28Z|00033|binding|INFO|Releasing lport 5950ebf0-6d13-4405-b07d-fec152665bda from this chassis (sb_readonly=0)
Jan 27 19:05:28 compute-0 nova_compute[185480]: 2026-01-27 19:05:28.994 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:29 compute-0 nova_compute[185480]: 2026-01-27 19:05:29.001 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:29 compute-0 nova_compute[185480]: 2026-01-27 19:05:29.082 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:29 compute-0 podman[238949]: 2026-01-27 19:05:29.325410808 +0000 UTC m=+0.089927754 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 19:05:29 compute-0 podman[238948]: 2026-01-27 19:05:29.347650426 +0000 UTC m=+0.104542365 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 19:05:29 compute-0 podman[201378]: time="2026-01-27T19:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:05:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:05:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4340 "" "Go-http-client/1.1"
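Note: the two GET lines above are requests against podman's libpod REST API over its unix socket, answered by the podman[201378] API process. The same container listing can be reproduced with curl against the socket; /run/podman/podman.sock below is an assumption about where the socket lives on this node.

    import subprocess

    # Query the libpod containers endpoint seen in the access-log lines above.
    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        check=True, capture_output=True, text=True)
    print(out.stdout[:200])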
Jan 27 19:05:30 compute-0 nova_compute[185480]: 2026-01-27 19:05:30.027 185484 DEBUG nova.compute.manager [req-934dbc22-e3fa-4611-9c07-24a77e04ebf2 req-1624660f-c6ec-401d-8bb0-cdf4f49ba5e4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Received event network-changed-b7e20f48-5e15-4381-8111-2bbf9ae03610 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:05:30 compute-0 nova_compute[185480]: 2026-01-27 19:05:30.028 185484 DEBUG nova.compute.manager [req-934dbc22-e3fa-4611-9c07-24a77e04ebf2 req-1624660f-c6ec-401d-8bb0-cdf4f49ba5e4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Refreshing instance network info cache due to event network-changed-b7e20f48-5e15-4381-8111-2bbf9ae03610. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:05:30 compute-0 nova_compute[185480]: 2026-01-27 19:05:30.028 185484 DEBUG oslo_concurrency.lockutils [req-934dbc22-e3fa-4611-9c07-24a77e04ebf2 req-1624660f-c6ec-401d-8bb0-cdf4f49ba5e4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:05:30 compute-0 nova_compute[185480]: 2026-01-27 19:05:30.028 185484 DEBUG oslo_concurrency.lockutils [req-934dbc22-e3fa-4611-9c07-24a77e04ebf2 req-1624660f-c6ec-401d-8bb0-cdf4f49ba5e4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:05:30 compute-0 nova_compute[185480]: 2026-01-27 19:05:30.029 185484 DEBUG nova.network.neutron [req-934dbc22-e3fa-4611-9c07-24a77e04ebf2 req-1624660f-c6ec-401d-8bb0-cdf4f49ba5e4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Refreshing network info cache for port b7e20f48-5e15-4381-8111-2bbf9ae03610 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:05:31 compute-0 podman[238989]: 2026-01-27 19:05:31.30254639 +0000 UTC m=+0.085153207 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, release-0.7.12=, io.buildah.version=1.29.0, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, distribution-scope=public, release=1214.1726694543)
Jan 27 19:05:31 compute-0 openstack_network_exporter[204477]: ERROR   19:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:05:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:05:31 compute-0 openstack_network_exporter[204477]: ERROR   19:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:05:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:05:31 compute-0 nova_compute[185480]: 2026-01-27 19:05:31.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:31 compute-0 nova_compute[185480]: 2026-01-27 19:05:31.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 19:05:31 compute-0 nova_compute[185480]: 2026-01-27 19:05:31.532 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 19:05:31 compute-0 nova_compute[185480]: 2026-01-27 19:05:31.795 185484 DEBUG nova.network.neutron [req-934dbc22-e3fa-4611-9c07-24a77e04ebf2 req-1624660f-c6ec-401d-8bb0-cdf4f49ba5e4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updated VIF entry in instance network info cache for port b7e20f48-5e15-4381-8111-2bbf9ae03610. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:05:31 compute-0 nova_compute[185480]: 2026-01-27 19:05:31.796 185484 DEBUG nova.network.neutron [req-934dbc22-e3fa-4611-9c07-24a77e04ebf2 req-1624660f-c6ec-401d-8bb0-cdf4f49ba5e4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updating instance_info_cache with network_info: [{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:05:31 compute-0 nova_compute[185480]: 2026-01-27 19:05:31.900 185484 DEBUG oslo_concurrency.lockutils [req-934dbc22-e3fa-4611-9c07-24a77e04ebf2 req-1624660f-c6ec-401d-8bb0-cdf4f49ba5e4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:05:32 compute-0 nova_compute[185480]: 2026-01-27 19:05:32.443 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:34 compute-0 nova_compute[185480]: 2026-01-27 19:05:34.086 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:36 compute-0 podman[239008]: 2026-01-27 19:05:36.353606178 +0000 UTC m=+0.118083047 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6)
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.192 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.216 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Triggering sync for uuid b6b280bb-d859-43f3-836a-f93d00510948 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.217 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "b6b280bb-d859-43f3-836a-f93d00510948" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.218 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "b6b280bb-d859-43f3-836a-f93d00510948" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.261 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "b6b280bb-d859-43f3-836a-f93d00510948" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.445 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.517 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.744 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.745 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.746 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:05:37 compute-0 nova_compute[185480]: 2026-01-27 19:05:37.746 185484 DEBUG nova.objects.instance [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6b280bb-d859-43f3-836a-f93d00510948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:05:39 compute-0 nova_compute[185480]: 2026-01-27 19:05:39.091 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:40 compute-0 nova_compute[185480]: 2026-01-27 19:05:40.894 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updating instance_info_cache with network_info: [{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:05:40 compute-0 nova_compute[185480]: 2026-01-27 19:05:40.922 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:05:40 compute-0 nova_compute[185480]: 2026-01-27 19:05:40.923 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:05:40 compute-0 nova_compute[185480]: 2026-01-27 19:05:40.924 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:40 compute-0 nova_compute[185480]: 2026-01-27 19:05:40.924 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:40 compute-0 nova_compute[185480]: 2026-01-27 19:05:40.925 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:05:40 compute-0 nova_compute[185480]: 2026-01-27 19:05:40.925 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:40 compute-0 nova_compute[185480]: 2026-01-27 19:05:40.958 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:40 compute-0 nova_compute[185480]: 2026-01-27 19:05:40.959 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:40 compute-0 nova_compute[185480]: 2026-01-27 19:05:40.961 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:40 compute-0 nova_compute[185480]: 2026-01-27 19:05:40.961 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:05:41 compute-0 nova_compute[185480]: 2026-01-27 19:05:41.065 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:41 compute-0 nova_compute[185480]: 2026-01-27 19:05:41.163 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:41 compute-0 nova_compute[185480]: 2026-01-27 19:05:41.165 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:41 compute-0 nova_compute[185480]: 2026-01-27 19:05:41.236 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:41 compute-0 nova_compute[185480]: 2026-01-27 19:05:41.238 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:41 compute-0 nova_compute[185480]: 2026-01-27 19:05:41.315 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:41 compute-0 nova_compute[185480]: 2026-01-27 19:05:41.317 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:05:41 compute-0 nova_compute[185480]: 2026-01-27 19:05:41.393 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:05:41 compute-0 nova_compute[185480]: 2026-01-27 19:05:41.753 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:05:41 compute-0 nova_compute[185480]: 2026-01-27 19:05:41.755 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5254MB free_disk=72.44681167602539GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:05:41 compute-0 nova_compute[185480]: 2026-01-27 19:05:41.755 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:05:41 compute-0 nova_compute[185480]: 2026-01-27 19:05:41.756 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.032 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.034 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.034 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.121 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing inventories for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.228 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating ProviderTree inventory for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.229 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating inventory in ProviderTree for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.249 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing aggregate associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.275 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing trait associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, traits: HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.333 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating inventory in ProviderTree for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.374 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updated inventory for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.374 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.374 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating inventory in ProviderTree for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.398 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.398 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.399 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.399 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 19:05:42 compute-0 nova_compute[185480]: 2026-01-27 19:05:42.449 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:43 compute-0 nova_compute[185480]: 2026-01-27 19:05:43.000 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:43 compute-0 nova_compute[185480]: 2026-01-27 19:05:43.001 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:43 compute-0 nova_compute[185480]: 2026-01-27 19:05:43.023 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:43 compute-0 nova_compute[185480]: 2026-01-27 19:05:43.023 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:43 compute-0 podman[239041]: 2026-01-27 19:05:43.369175579 +0000 UTC m=+0.139286129 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 19:05:43 compute-0 nova_compute[185480]: 2026-01-27 19:05:43.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:44 compute-0 nova_compute[185480]: 2026-01-27 19:05:44.095 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:44 compute-0 nova_compute[185480]: 2026-01-27 19:05:44.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:44 compute-0 nova_compute[185480]: 2026-01-27 19:05:44.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:05:45 compute-0 sshd-session[239062]: Received disconnect from 91.224.92.78 port 36148:11:  [preauth]
Jan 27 19:05:45 compute-0 podman[239064]: 2026-01-27 19:05:45.318773123 +0000 UTC m=+0.095965933 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:05:45 compute-0 sshd-session[239062]: Disconnected from authenticating user root 91.224.92.78 port 36148 [preauth]
Jan 27 19:05:45 compute-0 podman[239066]: 2026-01-27 19:05:45.336975721 +0000 UTC m=+0.089218016 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:05:45 compute-0 podman[239065]: 2026-01-27 19:05:45.36620599 +0000 UTC m=+0.134734297 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 19:05:47 compute-0 nova_compute[185480]: 2026-01-27 19:05:47.452 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:48 compute-0 ovn_controller[97647]: 2026-01-27T19:05:48Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:74:d9:f9 192.168.0.162
Jan 27 19:05:48 compute-0 ovn_controller[97647]: 2026-01-27T19:05:48Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:d9:f9 192.168.0.162
Jan 27 19:05:49 compute-0 nova_compute[185480]: 2026-01-27 19:05:49.100 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:52 compute-0 nova_compute[185480]: 2026-01-27 19:05:52.452 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:54 compute-0 nova_compute[185480]: 2026-01-27 19:05:54.105 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:57 compute-0 nova_compute[185480]: 2026-01-27 19:05:57.456 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:58 compute-0 ovn_controller[97647]: 2026-01-27T19:05:58Z|00034|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Jan 27 19:05:59 compute-0 nova_compute[185480]: 2026-01-27 19:05:59.109 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:05:59 compute-0 podman[201378]: time="2026-01-27T19:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:05:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:05:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4355 "" "Go-http-client/1.1"
Jan 27 19:06:00 compute-0 podman[239141]: 2026-01-27 19:06:00.318620507 +0000 UTC m=+0.095804230 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:06:00 compute-0 podman[239142]: 2026-01-27 19:06:00.349527317 +0000 UTC m=+0.114648763 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Jan 27 19:06:01 compute-0 openstack_network_exporter[204477]: ERROR   19:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:06:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:06:01 compute-0 openstack_network_exporter[204477]: ERROR   19:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:06:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:06:02 compute-0 podman[239182]: 2026-01-27 19:06:02.299356566 +0000 UTC m=+0.082537632 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, release=1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, io.openshift.expose-services=, distribution-scope=public, name=ubi9, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc.)
Jan 27 19:06:02 compute-0 nova_compute[185480]: 2026-01-27 19:06:02.461 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:04 compute-0 nova_compute[185480]: 2026-01-27 19:06:04.112 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:07 compute-0 podman[239201]: 2026-01-27 19:06:07.328650321 +0000 UTC m=+0.108872981 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc.)
Jan 27 19:06:07 compute-0 nova_compute[185480]: 2026-01-27 19:06:07.463 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:09 compute-0 nova_compute[185480]: 2026-01-27 19:06:09.115 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:12 compute-0 nova_compute[185480]: 2026-01-27 19:06:12.467 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:14 compute-0 nova_compute[185480]: 2026-01-27 19:06:14.118 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:14 compute-0 podman[239221]: 2026-01-27 19:06:14.316925485 +0000 UTC m=+0.096624359 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 19:06:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:15.122 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:06:15 compute-0 nova_compute[185480]: 2026-01-27 19:06:15.124 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:15.127 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:06:16 compute-0 podman[239241]: 2026-01-27 19:06:16.291006362 +0000 UTC m=+0.065505584 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:06:16 compute-0 podman[239243]: 2026-01-27 19:06:16.304771781 +0000 UTC m=+0.072431864 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 19:06:16 compute-0 podman[239242]: 2026-01-27 19:06:16.363814704 +0000 UTC m=+0.138261954 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:06:17 compute-0 nova_compute[185480]: 2026-01-27 19:06:17.468 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:19 compute-0 nova_compute[185480]: 2026-01-27 19:06:19.123 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:20.508 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:20.509 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:20.509 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.091 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.092 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.110 185484 DEBUG nova.compute.manager [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.206 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.207 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.218 185484 DEBUG nova.virt.hardware [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.219 185484 INFO nova.compute.claims [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Claim successful on node compute-0.ctlplane.example.com
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.451 185484 DEBUG nova.compute.provider_tree [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.468 185484 DEBUG nova.scheduler.client.report [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.472 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.503 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.504 185484 DEBUG nova.compute.manager [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.557 185484 DEBUG nova.compute.manager [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.558 185484 DEBUG nova.network.neutron [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.588 185484 INFO nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.635 185484 DEBUG nova.compute.manager [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.751 185484 DEBUG nova.compute.manager [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.753 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.753 185484 INFO nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Creating image(s)
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.754 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "/var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.754 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.755 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.773 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.832 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.834 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.835 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.846 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.914 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.916 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97,backing_fmt=raw /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.957 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97,backing_fmt=raw /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.959 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:06:22 compute-0 nova_compute[185480]: 2026-01-27 19:06:22.960 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.030 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.032 185484 DEBUG nova.virt.disk.api [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Checking if we can resize image /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.032 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.099 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.101 185484 DEBUG nova.virt.disk.api [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Cannot resize image /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.101 185484 DEBUG nova.objects.instance [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 41f46cfd-06bf-4ef6-85a3-cc6e8629637e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.119 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "/var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.120 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.121 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.135 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.195 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.196 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.197 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.212 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.277 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.278 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.334 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 1073741824" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.335 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.335 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.402 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.403 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.404 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Ensure instance console log exists: /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.404 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.405 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.405 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.667 185484 DEBUG nova.network.neutron [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Successfully updated port: 1447625c-00ab-407e-94d6-83dc67aba59c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.721 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.721 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquired lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.722 185484 DEBUG nova.network.neutron [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.796 185484 DEBUG nova.compute.manager [req-847a374e-6e25-457b-aa0b-d898337c166e req-217cdfa3-b8c1-40de-885a-99a720e15f53 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Received event network-changed-1447625c-00ab-407e-94d6-83dc67aba59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.797 185484 DEBUG nova.compute.manager [req-847a374e-6e25-457b-aa0b-d898337c166e req-217cdfa3-b8c1-40de-885a-99a720e15f53 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Refreshing instance network info cache due to event network-changed-1447625c-00ab-407e-94d6-83dc67aba59c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.797 185484 DEBUG oslo_concurrency.lockutils [req-847a374e-6e25-457b-aa0b-d898337c166e req-217cdfa3-b8c1-40de-885a-99a720e15f53 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:06:23 compute-0 nova_compute[185480]: 2026-01-27 19:06:23.954 185484 DEBUG nova.network.neutron [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:06:24 compute-0 nova_compute[185480]: 2026-01-27 19:06:24.126 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:24 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:24.132 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.556 185484 DEBUG nova.network.neutron [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updating instance_info_cache with network_info: [{"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.670 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Releasing lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.670 185484 DEBUG nova.compute.manager [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Instance network_info: |[{"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.670 185484 DEBUG oslo_concurrency.lockutils [req-847a374e-6e25-457b-aa0b-d898337c166e req-217cdfa3-b8c1-40de-885a-99a720e15f53 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.671 185484 DEBUG nova.network.neutron [req-847a374e-6e25-457b-aa0b-d898337c166e req-217cdfa3-b8c1-40de-885a-99a720e15f53 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Refreshing network info cache for port 1447625c-00ab-407e-94d6-83dc67aba59c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.673 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Start _get_guest_xml network_info=[{"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T19:03:35Z,direct_url=<?>,disk_format='qcow2',id=525193b7-cb5a-4d63-9747-3b917622bbe3,min_disk=0,min_ram=0,name='cirros',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T19:03:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.682 185484 WARNING nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.688 185484 DEBUG nova.virt.libvirt.host [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.689 185484 DEBUG nova.virt.libvirt.host [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.695 185484 DEBUG nova.virt.libvirt.host [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.695 185484 DEBUG nova.virt.libvirt.host [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.696 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.696 185484 DEBUG nova.virt.hardware [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T19:03:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bc7c8c58-0a2b-4396-9f89-7ff8e35afa36',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T19:03:35Z,direct_url=<?>,disk_format='qcow2',id=525193b7-cb5a-4d63-9747-3b917622bbe3,min_disk=0,min_ram=0,name='cirros',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T19:03:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.697 185484 DEBUG nova.virt.hardware [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.697 185484 DEBUG nova.virt.hardware [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.698 185484 DEBUG nova.virt.hardware [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.698 185484 DEBUG nova.virt.hardware [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.699 185484 DEBUG nova.virt.hardware [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.699 185484 DEBUG nova.virt.hardware [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.699 185484 DEBUG nova.virt.hardware [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.700 185484 DEBUG nova.virt.hardware [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.700 185484 DEBUG nova.virt.hardware [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.700 185484 DEBUG nova.virt.hardware [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.707 185484 DEBUG nova.virt.libvirt.vif [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:06:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d',id=2,image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f04ec1493db14ca1adbb4b6abd1667b1',ramdisk_id='',reservation_id='r-fernbz5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:06:22Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04OTUyNDA2Njg1Mzc2NjYxMjIzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg5NTI0MDY2ODUzNzY2NjEyMjM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODk1MjQwNjY4NTM3NjY2MTIyMz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg5NTI0MDY2ODUzNzY2NjEyMjM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04OTUyNDA2Njg1Mzc2NjYxMjIzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04OTUyNDA2Njg1Mzc2NjYxMjIzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 27 19:06:25 compute-0 nova_compute[185480]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODk1MjQwNjY4NTM3NjY2MTIyMz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg5NTI0MDY2ODUzNzY2NjEyMjM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04OTUyNDA2Njg1Mzc2NjYxMjIzPT0tLQo=',user_id='6d30d46dc88a4403b3a241949384d8f7',uuid=41f46cfd-06bf-4ef6-85a3-cc6e8629637e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.707 185484 DEBUG nova.network.os_vif_util [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converting VIF {"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.708 185484 DEBUG nova.network.os_vif_util [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:f3:b6,bridge_name='br-int',has_traffic_filtering=True,id=1447625c-00ab-407e-94d6-83dc67aba59c,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1447625c-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.710 185484 DEBUG nova.objects.instance [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 41f46cfd-06bf-4ef6-85a3-cc6e8629637e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.773 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] End _get_guest_xml xml=<domain type="kvm">
Jan 27 19:06:25 compute-0 nova_compute[185480]:   <uuid>41f46cfd-06bf-4ef6-85a3-cc6e8629637e</uuid>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   <name>instance-00000002</name>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   <memory>524288</memory>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   <vcpu>1</vcpu>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   <metadata>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <nova:name>vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d</nova:name>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <nova:creationTime>2026-01-27 19:06:25</nova:creationTime>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <nova:flavor name="m1.small">
Jan 27 19:06:25 compute-0 nova_compute[185480]:         <nova:memory>512</nova:memory>
Jan 27 19:06:25 compute-0 nova_compute[185480]:         <nova:disk>1</nova:disk>
Jan 27 19:06:25 compute-0 nova_compute[185480]:         <nova:swap>0</nova:swap>
Jan 27 19:06:25 compute-0 nova_compute[185480]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 19:06:25 compute-0 nova_compute[185480]:         <nova:vcpus>1</nova:vcpus>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       </nova:flavor>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <nova:owner>
Jan 27 19:06:25 compute-0 nova_compute[185480]:         <nova:user uuid="6d30d46dc88a4403b3a241949384d8f7">admin</nova:user>
Jan 27 19:06:25 compute-0 nova_compute[185480]:         <nova:project uuid="f04ec1493db14ca1adbb4b6abd1667b1">admin</nova:project>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       </nova:owner>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <nova:root type="image" uuid="525193b7-cb5a-4d63-9747-3b917622bbe3"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <nova:ports>
Jan 27 19:06:25 compute-0 nova_compute[185480]:         <nova:port uuid="1447625c-00ab-407e-94d6-83dc67aba59c">
Jan 27 19:06:25 compute-0 nova_compute[185480]:           <nova:ip type="fixed" address="192.168.0.54" ipVersion="4"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:         </nova:port>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       </nova:ports>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     </nova:instance>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   </metadata>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   <sysinfo type="smbios">
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <system>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <entry name="manufacturer">RDO</entry>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <entry name="product">OpenStack Compute</entry>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <entry name="serial">41f46cfd-06bf-4ef6-85a3-cc6e8629637e</entry>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <entry name="uuid">41f46cfd-06bf-4ef6-85a3-cc6e8629637e</entry>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <entry name="family">Virtual Machine</entry>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     </system>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   </sysinfo>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   <os>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <boot dev="hd"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <smbios mode="sysinfo"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   </os>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   <features>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <acpi/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <apic/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <vmcoreinfo/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   </features>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   <clock offset="utc">
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <timer name="hpet" present="no"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   </clock>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   <cpu mode="host-model" match="exact">
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   </cpu>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   <devices>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <target dev="vda" bus="virtio"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <target dev="vdb" bus="virtio"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <disk type="file" device="cdrom">
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.config"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <target dev="sda" bus="sata"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <interface type="ethernet">
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <mac address="fa:16:3e:6c:f3:b6"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <mtu size="1442"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <target dev="tap1447625c-00"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     </interface>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <serial type="pty">
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <log file="/var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/console.log" append="off"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     </serial>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <video>
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     </video>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <input type="tablet" bus="usb"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <rng model="virtio">
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <backend model="random">/dev/urandom</backend>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     </rng>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <controller type="usb" index="0"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     <memballoon model="virtio">
Jan 27 19:06:25 compute-0 nova_compute[185480]:       <stats period="10"/>
Jan 27 19:06:25 compute-0 nova_compute[185480]:     </memballoon>
Jan 27 19:06:25 compute-0 nova_compute[185480]:   </devices>
Jan 27 19:06:25 compute-0 nova_compute[185480]: </domain>
Jan 27 19:06:25 compute-0 nova_compute[185480]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.774 185484 DEBUG nova.compute.manager [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Preparing to wait for external event network-vif-plugged-1447625c-00ab-407e-94d6-83dc67aba59c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.774 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.774 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.775 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.776 185484 DEBUG nova.virt.libvirt.vif [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:06:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d',id=2,image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f04ec1493db14ca1adbb4b6abd1667b1',ramdisk_id='',reservation_id='r-fernbz5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:06:22Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04OTUyNDA2Njg1Mzc2NjYxMjIzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg5NTI0MDY2ODUzNzY2NjEyMjM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODk1MjQwNjY4NTM3NjY2MTIyMz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg5NTI0MDY2ODUzNzY2NjEyMjM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04OTUyNDA2Njg1Mzc2NjYxMjIzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04OTUyNDA2Njg1Mzc2NjYxMjIzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 27 19:06:25 compute-0 nova_compute[185480]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODk1MjQwNjY4NTM3NjY2MTIyMz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg5NTI0MDY2ODUzNzY2NjEyMjM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04OTUyNDA2Njg1Mzc2NjYxMjIzPT0tLQo=',user_id='6d30d46dc88a4403b3a241949384d8f7',uuid=41f46cfd-06bf-4ef6-85a3-cc6e8629637e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.776 185484 DEBUG nova.network.os_vif_util [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converting VIF {"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.777 185484 DEBUG nova.network.os_vif_util [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:f3:b6,bridge_name='br-int',has_traffic_filtering=True,id=1447625c-00ab-407e-94d6-83dc67aba59c,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1447625c-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.778 185484 DEBUG os_vif [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:f3:b6,bridge_name='br-int',has_traffic_filtering=True,id=1447625c-00ab-407e-94d6-83dc67aba59c,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1447625c-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.779 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.779 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.780 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.784 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.784 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1447625c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.785 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1447625c-00, col_values=(('external_ids', {'iface-id': '1447625c-00ab-407e-94d6-83dc67aba59c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:f3:b6', 'vm-uuid': '41f46cfd-06bf-4ef6-85a3-cc6e8629637e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.787 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:25 compute-0 NetworkManager[56191]: <info>  [1769540785.7882] manager: (tap1447625c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.790 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.795 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.796 185484 INFO os_vif [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:f3:b6,bridge_name='br-int',has_traffic_filtering=True,id=1447625c-00ab-407e-94d6-83dc67aba59c,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1447625c-00')
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.905 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.905 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.905 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.906 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No VIF found with MAC fa:16:3e:6c:f3:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 19:06:25 compute-0 nova_compute[185480]: 2026-01-27 19:06:25.906 185484 INFO nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Using config drive
Jan 27 19:06:25 compute-0 rsyslogd[235877]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 19:06:25.707 185484 DEBUG nova.virt.libvirt.vif [None req-427fa384-13 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:06:25 compute-0 rsyslogd[235877]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 19:06:25.776 185484 DEBUG nova.virt.libvirt.vif [None req-427fa384-13 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:06:27 compute-0 nova_compute[185480]: 2026-01-27 19:06:27.472 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:27 compute-0 nova_compute[185480]: 2026-01-27 19:06:27.815 185484 INFO nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Creating config drive at /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.config
Jan 27 19:06:27 compute-0 nova_compute[185480]: 2026-01-27 19:06:27.828 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywj_4gvn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:27 compute-0 nova_compute[185480]: 2026-01-27 19:06:27.955 185484 DEBUG oslo_concurrency.processutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywj_4gvn" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:28 compute-0 kernel: tap1447625c-00: entered promiscuous mode
Jan 27 19:06:28 compute-0 NetworkManager[56191]: <info>  [1769540788.0381] manager: (tap1447625c-00): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 27 19:06:28 compute-0 ovn_controller[97647]: 2026-01-27T19:06:28Z|00035|binding|INFO|Claiming lport 1447625c-00ab-407e-94d6-83dc67aba59c for this chassis.
Jan 27 19:06:28 compute-0 ovn_controller[97647]: 2026-01-27T19:06:28Z|00036|binding|INFO|1447625c-00ab-407e-94d6-83dc67aba59c: Claiming fa:16:3e:6c:f3:b6 192.168.0.54
Jan 27 19:06:28 compute-0 nova_compute[185480]: 2026-01-27 19:06:28.040 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.058 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:f3:b6 192.168.0.54'], port_security=['fa:16:3e:6c:f3:b6 192.168.0.54'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-z6txynvcwoqi-27d46klag4r3-odvujbh3kmzr-port-r2mhxscfwr32', 'neutron:cidrs': '192.168.0.54/24', 'neutron:device_id': '41f46cfd-06bf-4ef6-85a3-cc6e8629637e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-z6txynvcwoqi-27d46klag4r3-odvujbh3kmzr-port-r2mhxscfwr32', 'neutron:project_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a99badb-bb64-4e2f-95a8-78f317eb6676', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33ebbec8-56f4-45ac-84a6-f80dd4a7c167, chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=1447625c-00ab-407e-94d6-83dc67aba59c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.060 106898 INFO neutron.agent.ovn.metadata.agent [-] Port 1447625c-00ab-407e-94d6-83dc67aba59c in datapath 4f32262d-dee8-406b-8a5a-09e95f48c8d5 bound to our chassis
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.061 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f32262d-dee8-406b-8a5a-09e95f48c8d5
Jan 27 19:06:28 compute-0 nova_compute[185480]: 2026-01-27 19:06:28.071 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:28 compute-0 ovn_controller[97647]: 2026-01-27T19:06:28Z|00037|binding|INFO|Setting lport 1447625c-00ab-407e-94d6-83dc67aba59c ovn-installed in OVS
Jan 27 19:06:28 compute-0 ovn_controller[97647]: 2026-01-27T19:06:28Z|00038|binding|INFO|Setting lport 1447625c-00ab-407e-94d6-83dc67aba59c up in Southbound
Jan 27 19:06:28 compute-0 nova_compute[185480]: 2026-01-27 19:06:28.077 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.081 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[23965e54-5f5c-43a6-a633-cbcb4e91ad19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:06:28 compute-0 systemd-machined[156762]: New machine qemu-2-instance-00000002.
Jan 27 19:06:28 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.112 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[0c65738e-fb46-41e3-95f1-bada31ce5a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.115 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[11ac2dc3-fb50-495a-92a8-a51804d40f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:06:28 compute-0 systemd-udevd[239358]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.145 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb82d09-9272-477e-b0b1-c9b674be21be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:06:28 compute-0 NetworkManager[56191]: <info>  [1769540788.1498] device (tap1447625c-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 19:06:28 compute-0 NetworkManager[56191]: <info>  [1769540788.1542] device (tap1447625c-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.163 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6a3c56-fcf3-486e-b914-24cd30e1d7c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f32262d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:bd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383551, 'reachable_time': 20481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239364, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.179 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[640bd644-ee83-4a6d-a91b-918cf7b62c6f]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap4f32262d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383562, 'tstamp': 383562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239367, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4f32262d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383566, 'tstamp': 383566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239367, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
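The two privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK and RTM_NEWADDR) taken inside the ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5 namespace, showing the metadata tap device and its 192.168.0.2 and 169.254.169.254 addresses. A minimal sketch of the same query done directly with pyroute2, which is what the agent's privsep helpers wrap (assumes root and the pyroute2 package; the namespace name is copied from the log):

# Minimal sketch: dump links and addresses inside the ovnmeta namespace,
# mirroring the RTM_NEWLINK / RTM_NEWADDR replies logged above.
from pyroute2 import NetNS

NS = 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5'  # namespace from the log

with NetNS(NS) as ns:
    for link in ns.get_links():                        # RTM_NEWLINK dump
        name = link.get_attr('IFLA_IFNAME')
        state = link.get_attr('IFLA_OPERSTATE')
        print(f'{name}: {state}')
        for addr in ns.get_addr(index=link['index']):  # RTM_NEWADDR dump
            print('  ', addr.get_attr('IFA_ADDRESS'), '/', addr['prefixlen'])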
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.181 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f32262d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:06:28 compute-0 nova_compute[185480]: 2026-01-27 19:06:28.183 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.185 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f32262d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.185 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.186 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f32262d-d0, col_values=(('external_ids', {'iface-id': '5950ebf0-6d13-4405-b07d-fec152665bda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:06:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:06:28.187 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
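The DelPortCommand / AddPortCommand / DbSetCommand transactions above are the metadata agent rewiring the tap4f32262d-d0 veth end onto br-int and tagging its Interface record with the metadata port's iface-id (both transactions report "no change" because the port is already in place). A rough, hedged equivalent using the ovs-vsctl CLI from Python (assumes root and a local ovs-vsctl; port name and iface-id are copied from the log):

# ovs-vsctl equivalent of the ovsdbapp commands logged above; illustrative only.
import subprocess

PORT = 'tap4f32262d-d0'
IFACE_ID = '5950ebf0-6d13-4405-b07d-fec152665bda'

subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', PORT], check=True)
subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', PORT], check=True)
subprocess.run(['ovs-vsctl', 'set', 'Interface', PORT,
                f'external_ids:iface-id={IFACE_ID}'], check=True)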
Jan 27 19:06:28 compute-0 nova_compute[185480]: 2026-01-27 19:06:28.422 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769540788.4219282, 41f46cfd-06bf-4ef6-85a3-cc6e8629637e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:06:28 compute-0 nova_compute[185480]: 2026-01-27 19:06:28.422 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] VM Started (Lifecycle Event)
Jan 27 19:06:28 compute-0 nova_compute[185480]: 2026-01-27 19:06:28.451 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:06:28 compute-0 nova_compute[185480]: 2026-01-27 19:06:28.459 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769540788.4220347, 41f46cfd-06bf-4ef6-85a3-cc6e8629637e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:06:28 compute-0 nova_compute[185480]: 2026-01-27 19:06:28.460 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] VM Paused (Lifecycle Event)
Jan 27 19:06:28 compute-0 nova_compute[185480]: 2026-01-27 19:06:28.484 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:06:28 compute-0 nova_compute[185480]: 2026-01-27 19:06:28.490 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:06:28 compute-0 nova_compute[185480]: 2026-01-27 19:06:28.513 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] During sync_power_state the instance has a pending task (spawning). Skip.
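The "Synchronizing instance power state" / "Skip" pair above compares the DB power_state (0) with what the hypervisor reports (3, i.e. paused) and bails out because the instance still has a pending task (spawning). An illustrative sketch of that decision, not Nova's actual code; the numeric constants match nova.compute.power_state:

# Sketch of the skip decision logged above (0=NOSTATE, 1=RUNNING, 3=PAUSED).
NOSTATE, RUNNING, PAUSED = 0, 1, 3

def should_sync(db_power_state, vm_power_state, task_state):
    if task_state is not None:          # e.g. 'spawning' -> skip for now
        return False
    return db_power_state != vm_power_state

print(should_sync(NOSTATE, PAUSED, 'spawning'))  # False -> "Skip."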
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.015 185484 DEBUG nova.compute.manager [req-c7074b5a-64ca-4c81-9ed3-27500b1a6455 req-e62efbd0-f04e-4b7e-a349-30500217759c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Received event network-vif-plugged-1447625c-00ab-407e-94d6-83dc67aba59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.016 185484 DEBUG oslo_concurrency.lockutils [req-c7074b5a-64ca-4c81-9ed3-27500b1a6455 req-e62efbd0-f04e-4b7e-a349-30500217759c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.016 185484 DEBUG oslo_concurrency.lockutils [req-c7074b5a-64ca-4c81-9ed3-27500b1a6455 req-e62efbd0-f04e-4b7e-a349-30500217759c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.017 185484 DEBUG oslo_concurrency.lockutils [req-c7074b5a-64ca-4c81-9ed3-27500b1a6455 req-e62efbd0-f04e-4b7e-a349-30500217759c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
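The Acquiring/acquired/released trio above is oslo.concurrency's lockutils serializing access to the per-instance event queue while the network-vif-plugged event is popped. A minimal example of the same primitive (the lock name is copied from the log):

# Minimal oslo.concurrency example producing the same acquire/release
# DEBUG lines as above.
from oslo_concurrency import lockutils

with lockutils.lock('41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events'):
    # Nova pops the instance event here; anything in this block is
    # serialized against other users of the same lock name.
    pass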
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.017 185484 DEBUG nova.compute.manager [req-c7074b5a-64ca-4c81-9ed3-27500b1a6455 req-e62efbd0-f04e-4b7e-a349-30500217759c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Processing event network-vif-plugged-1447625c-00ab-407e-94d6-83dc67aba59c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.018 185484 DEBUG nova.compute.manager [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.022 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769540789.0219283, 41f46cfd-06bf-4ef6-85a3-cc6e8629637e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.023 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] VM Resumed (Lifecycle Event)
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.029 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.034 185484 INFO nova.virt.libvirt.driver [-] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Instance spawned successfully.
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.034 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.047 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.059 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.066 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.067 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.067 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.068 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.068 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.069 185484 DEBUG nova.virt.libvirt.driver [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
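The "Found default for ..." lines above show the libvirt driver filling in image properties the Glance image did not define (sata CD-ROM bus, virtio disk/video/VIF model, USB input, usbtablet pointer). The same properties can be pinned on the image up front; a hedged sketch using the openstack CLI from Python ('cirros' is a placeholder image name, and admin credentials are assumed to be sourced in the environment):

# Pre-set the image properties the libvirt driver defaulted above; illustrative.
import subprocess

props = {
    'hw_cdrom_bus': 'sata',
    'hw_disk_bus': 'virtio',
    'hw_input_bus': 'usb',
    'hw_pointer_model': 'usbtablet',
    'hw_video_model': 'virtio',
    'hw_vif_model': 'virtio',
}
cmd = ['openstack', 'image', 'set']
for key, value in props.items():
    cmd += ['--property', f'{key}={value}']
cmd.append('cirros')  # placeholder image name
subprocess.run(cmd, check=True)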
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.092 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.121 185484 INFO nova.compute.manager [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Took 6.37 seconds to spawn the instance on the hypervisor.
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.121 185484 DEBUG nova.compute.manager [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.173 185484 INFO nova.compute.manager [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Took 7.00 seconds to build instance.
Jan 27 19:06:29 compute-0 nova_compute[185480]: 2026-01-27 19:06:29.188 185484 DEBUG oslo_concurrency.lockutils [None req-427fa384-13f2-431a-8acc-98f7888e98e6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:06:29 compute-0 podman[201378]: time="2026-01-27T19:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:06:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:06:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4359 "" "Go-http-client/1.1"
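The two GET lines above are a monitoring client hitting podman's libpod REST API (container list, then per-container stats). The same queries can be reproduced over the Podman API socket; a hedged sketch (the socket path is an assumption and is not shown in the log, the URL paths are copied from the log lines):

# Reproduce the libpod REST calls logged above via curl over the API socket.
import subprocess

SOCK = '/run/podman/podman.sock'   # assumed rootful default; not in the log
BASE = 'http://d/v4.9.3/libpod'

for path in ('/containers/json?all=true&external=false',
             '/containers/stats?all=false&interval=1&stream=false'):
    subprocess.run(['curl', '-s', '--unix-socket', SOCK, BASE + path], check=True)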
Jan 27 19:06:30 compute-0 nova_compute[185480]: 2026-01-27 19:06:30.789 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:30 compute-0 nova_compute[185480]: 2026-01-27 19:06:30.846 185484 DEBUG nova.network.neutron [req-847a374e-6e25-457b-aa0b-d898337c166e req-217cdfa3-b8c1-40de-885a-99a720e15f53 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updated VIF entry in instance network info cache for port 1447625c-00ab-407e-94d6-83dc67aba59c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:06:30 compute-0 nova_compute[185480]: 2026-01-27 19:06:30.847 185484 DEBUG nova.network.neutron [req-847a374e-6e25-457b-aa0b-d898337c166e req-217cdfa3-b8c1-40de-885a-99a720e15f53 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updating instance_info_cache with network_info: [{"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:06:30 compute-0 nova_compute[185480]: 2026-01-27 19:06:30.865 185484 DEBUG oslo_concurrency.lockutils [req-847a374e-6e25-457b-aa0b-d898337c166e req-217cdfa3-b8c1-40de-885a-99a720e15f53 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
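The instance_info_cache update above carries the port's fixed address (192.168.0.54) and its floating IP (192.168.122.232). A small sketch that walks that structure, trimmed to the fields used here, to list the addresses per VIF:

# Walk the network_info structure cached above (trimmed) and print addresses.
network_info = [{
    "id": "1447625c-00ab-407e-94d6-83dc67aba59c",
    "network": {"subnets": [{
        "ips": [{"address": "192.168.0.54", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.232",
                                   "type": "floating"}]}],
    }]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print(vif["id"], ip["type"], ip["address"])
            for fip in ip.get("floating_ips", []):
                print(vif["id"], fip["type"], fip["address"])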
Jan 27 19:06:31 compute-0 nova_compute[185480]: 2026-01-27 19:06:31.258 185484 DEBUG nova.compute.manager [req-400d9a06-ca5d-485c-be3e-86f49846276a req-3f12a316-9fcb-45fd-ad55-28e35f792ee9 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Received event network-vif-plugged-1447625c-00ab-407e-94d6-83dc67aba59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:06:31 compute-0 nova_compute[185480]: 2026-01-27 19:06:31.259 185484 DEBUG oslo_concurrency.lockutils [req-400d9a06-ca5d-485c-be3e-86f49846276a req-3f12a316-9fcb-45fd-ad55-28e35f792ee9 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:31 compute-0 nova_compute[185480]: 2026-01-27 19:06:31.259 185484 DEBUG oslo_concurrency.lockutils [req-400d9a06-ca5d-485c-be3e-86f49846276a req-3f12a316-9fcb-45fd-ad55-28e35f792ee9 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:31 compute-0 nova_compute[185480]: 2026-01-27 19:06:31.259 185484 DEBUG oslo_concurrency.lockutils [req-400d9a06-ca5d-485c-be3e-86f49846276a req-3f12a316-9fcb-45fd-ad55-28e35f792ee9 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:06:31 compute-0 nova_compute[185480]: 2026-01-27 19:06:31.259 185484 DEBUG nova.compute.manager [req-400d9a06-ca5d-485c-be3e-86f49846276a req-3f12a316-9fcb-45fd-ad55-28e35f792ee9 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] No waiting events found dispatching network-vif-plugged-1447625c-00ab-407e-94d6-83dc67aba59c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:06:31 compute-0 nova_compute[185480]: 2026-01-27 19:06:31.259 185484 WARNING nova.compute.manager [req-400d9a06-ca5d-485c-be3e-86f49846276a req-3f12a316-9fcb-45fd-ad55-28e35f792ee9 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Received unexpected event network-vif-plugged-1447625c-00ab-407e-94d6-83dc67aba59c for instance with vm_state active and task_state None.
Jan 27 19:06:31 compute-0 podman[239377]: 2026-01-27 19:06:31.335269104 +0000 UTC m=+0.094691732 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 19:06:31 compute-0 podman[239376]: 2026-01-27 19:06:31.357660385 +0000 UTC m=+0.116639382 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 19:06:31 compute-0 openstack_network_exporter[204477]: ERROR   19:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:06:31 compute-0 openstack_network_exporter[204477]: ERROR   19:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
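The two openstack_network_exporter errors above come from ovs-appctl's dpif-netdev/pmd-*-show commands, which only work when a userspace (netdev/DPDK) datapath exists; this host runs the kernel datapath, so there are no PMD threads to report. A hedged pre-check that lists the datapaths before attempting the PMD queries (assumes root and ovs-appctl on PATH):

# Only query PMD stats when a netdev (userspace) datapath is present.
import subprocess

dpifs = subprocess.run(['ovs-appctl', 'dpctl/dump-dps'],
                       capture_output=True, text=True, check=True).stdout.split()
if any(dp.startswith('netdev@') for dp in dpifs):
    print(subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-rxq-show'],
                         capture_output=True, text=True, check=True).stdout)
else:
    print('no netdev datapath; skipping PMD stats')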
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.094 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.095 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
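The two DEBUG lines above say the [pollsters] source has more pollsters than worker threads ([1]), so the pollsters queue on a ThreadPoolExecutor and run one after another, stretching the polling cycle. A small, generic illustration of that behaviour:

# More submitted tasks than workers simply queue and complete sequentially.
from concurrent.futures import ThreadPoolExecutor
import time

def poll(name):
    time.sleep(0.1)          # stand-in for one pollster's work
    return name

with ThreadPoolExecutor(max_workers=1) as executor:   # "[1] threads"
    futures = [executor.submit(poll, f'pollster-{i}') for i in range(5)]
    for f in futures:
        print(f.result())    # finishes one by one, extending the cycle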
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.102 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b6b280bb-d859-43f3-836a-f93d00510948 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:06:32 compute-0 nova_compute[185480]: 2026-01-27 19:06:32.476 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.557 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b6b280bb-d859-43f3-836a-f93d00510948 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.896 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Tue, 27 Jan 2026 19:06:32 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f02a9c1a-baeb-4c28-a8d3-d07aa66af0d0 x-openstack-request-id: req-f02a9c1a-baeb-4c28-a8d3-d07aa66af0d0 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.896 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b6b280bb-d859-43f3-836a-f93d00510948", "name": "test_0", "status": "ACTIVE", "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "user_id": "6d30d46dc88a4403b3a241949384d8f7", "metadata": {}, "hostId": "d5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0", "image": {"id": "525193b7-cb5a-4d63-9747-3b917622bbe3", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/525193b7-cb5a-4d63-9747-3b917622bbe3"}]}, "flavor": {"id": "bc7c8c58-0a2b-4396-9f89-7ff8e35afa36", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bc7c8c58-0a2b-4396-9f89-7ff8e35afa36"}]}, "created": "2026-01-27T19:05:01Z", "updated": "2026-01-27T19:05:16Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.162", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:74:d9:f9"}, {"version": 4, "addr": "192.168.122.183", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:74:d9:f9"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b6b280bb-d859-43f3-836a-f93d00510948"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b6b280bb-d859-43f3-836a-f93d00510948"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T19:05:16.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.896 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b6b280bb-d859-43f3-836a-f93d00510948 used request id req-f02a9c1a-baeb-4c28-a8d3-d07aa66af0d0 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
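The REQ/RESP pair above is ceilometer's novaclient session logging a GET /v2.1/servers/<uuid> against the internal Nova API during instance discovery. A hedged sketch reproducing the same lookup with keystoneauth1 and python-novaclient (the auth URL and credentials are placeholders; the server UUID is taken from the log):

# Reproduce the GET /v2.1/servers/<uuid> call logged above; illustrative.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client

auth = v3.Password(auth_url='https://keystone.example.com/v3',   # placeholder
                   username='admin', password='secret',          # placeholder
                   project_name='admin',
                   user_domain_name='Default', project_domain_name='Default')
nova = client.Client('2.1', session=session.Session(auth=auth))

server = nova.servers.get('b6b280bb-d859-43f3-836a-f93d00510948')
print(server.name, server.status, getattr(server, 'OS-EXT-STS:power_state'))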
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.899 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b6b280bb-d859-43f3-836a-f93d00510948', 'name': 'test_0', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.903 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 41f46cfd-06bf-4ef6-85a3-cc6e8629637e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:06:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:32.904 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/41f46cfd-06bf-4ef6-85a3-cc6e8629637e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.252 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Tue, 27 Jan 2026 19:06:32 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-03fe9f2a-d7a7-4478-b5de-44bdf6c323e4 x-openstack-request-id: req-03fe9f2a-d7a7-4478-b5de-44bdf6c323e4 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.252 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "41f46cfd-06bf-4ef6-85a3-cc6e8629637e", "name": "vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d", "status": "ACTIVE", "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "user_id": "6d30d46dc88a4403b3a241949384d8f7", "metadata": {"metering.server_group": "bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871"}, "hostId": "d5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0", "image": {"id": "525193b7-cb5a-4d63-9747-3b917622bbe3", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/525193b7-cb5a-4d63-9747-3b917622bbe3"}]}, "flavor": {"id": "bc7c8c58-0a2b-4396-9f89-7ff8e35afa36", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bc7c8c58-0a2b-4396-9f89-7ff8e35afa36"}]}, "created": "2026-01-27T19:06:20Z", "updated": "2026-01-27T19:06:29Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.54", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6c:f3:b6"}, {"version": 4, "addr": "192.168.122.232", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6c:f3:b6"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/41f46cfd-06bf-4ef6-85a3-cc6e8629637e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/41f46cfd-06bf-4ef6-85a3-cc6e8629637e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T19:06:29.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.252 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/41f46cfd-06bf-4ef6-85a3-cc6e8629637e used request id req-03fe9f2a-d7a7-4478-b5de-44bdf6c323e4 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.254 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41f46cfd-06bf-4ef6-85a3-cc6e8629637e', 'name': 'vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.254 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.254 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.254 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.255 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.256 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T19:06:33.254688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.282 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 21766144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.283 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.283 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.305 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.306 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.306 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.308 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.309 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T19:06:33.309032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 podman[239420]: 2026-01-27 19:06:33.336837148 +0000 UTC m=+0.105859217 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_id=kepler, io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., architecture=x86_64, version=9.4, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9)
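The podman health_status event above is produced by the periodic healthcheck declared in the kepler container's config_data ('test': '/openstack/healthcheck kepler'). If the same status needs to be read outside the journal, one option is to ask podman directly; the sketch below is only an illustration under that assumption. It uses the container name kepler from the entry above, and tries both "Health" and "Healthcheck" because the key holding the health state differs between podman releases.

    import json
    import subprocess

    def container_health(name="kepler"):
        # Dump the container state as JSON via `podman inspect` and pick out
        # the health section; podman exposes it as either "Health" or
        # "Healthcheck" depending on the release, so try both keys.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State}}", name],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(out)
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown"), health.get("FailingStreak", 0)

    if __name__ == "__main__":
        status, streak = container_health()
        print(f"kepler health: {status} (failing streak: {streak})")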
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.343 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.364 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.364 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 41f46cfd-06bf-4ef6-85a3-cc6e8629637e: ceilometer.compute.pollsters.NoVolumeException
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
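The memory.usage warning above is typically seen when libvirt has no balloon-driver statistics for the guest, which is the case until a memory stats collection period has been set on the domain. A minimal libvirt-python sketch for checking this from the compute host is below; it assumes the python3-libvirt bindings and access to qemu:///system, and it reuses the instance UUID from the entry above purely as an example. It does not reproduce the exact formula ceilometer applies to these counters.

    import libvirt

    # Instance UUID taken from the log entry above, used only as an example.
    INSTANCE = "41f46cfd-06bf-4ef6-85a3-cc6e8629637e"

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString(INSTANCE)

    # Enable balloon statistics collection (period in seconds). Without a
    # period the driver reports little beyond 'actual', and meters derived
    # from the balloon counters stay unavailable.
    dom.setMemoryStatsPeriod(10)

    # Keys such as 'actual', 'rss', 'available', 'unused' or 'usable' may be
    # present, depending on guest driver support.
    for key, value in dom.memoryStats().items():
        print(f"{key}: {value} KiB")

    conn.close()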
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.365 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.365 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.365 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.365 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.365 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.366 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T19:06:33.365393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.366 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.366 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.367 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T19:06:33.366962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.370 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b6b280bb-d859-43f3-836a-f93d00510948 / tapb7e20f48-5e inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.370 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.373 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 41f46cfd-06bf-4ef6-85a3-cc6e8629637e / tap1447625c-00 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.374 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
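The "No delta meter predecessor" debug lines above explain the zero values that follow: a *.delta meter is the difference between the current cumulative interface counter and the reading cached from the previous poll, and on the first poll for an instance/vNIC pair there is nothing to subtract. The sketch below is only an illustration of that bookkeeping, not ceilometer's actual implementation.

    # Hypothetical illustration of a delta meter: cache the last cumulative
    # reading per (instance, device) and report the difference on later polls.
    _previous: dict[tuple[str, str], int] = {}

    def delta_sample(instance_id: str, device: str, cumulative: int) -> int:
        key = (instance_id, device)
        prior = _previous.get(key)
        _previous[key] = cumulative
        if prior is None:
            # First poll: no predecessor, so the delta is reported as 0.
            return 0
        # Counters can reset (e.g. after an instance reboot); clamp at 0.
        return max(cumulative - prior, 0)

    print(delta_sample("b6b280bb", "tapb7e20f48-5e", 12345))  # 0 on first poll
    print(delta_sample("b6b280bb", "tapb7e20f48-5e", 15000))  # 2655 afterwards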
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.374 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.374 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T19:06:33.374808) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.430 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.431 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.431 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.497 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.497 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.498 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.499 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.499 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.500 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.500 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.501 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T19:06:33.499834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.502 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.drop volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T19:06:33.502631) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.503 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.504 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.505 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T19:06:33.505286) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.505 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.505 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.506 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.506 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.506 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.507 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.507 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
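The per-device disk meters polled in this cycle (disk.device.allocation, disk.device.capacity, disk.device.usage) roughly follow the three sizes libvirt reports for each block device: capacity is the virtual disk size (1073741824 bytes = 1 GiB here), while allocation and the physical figure describe how much the backing image actually occupies. A small libvirt-python sketch that prints them is below; it assumes the python3-libvirt bindings and reuses an instance UUID from the entries above as an example.

    import libvirt
    from xml.etree import ElementTree

    INSTANCE = "b6b280bb-d859-43f3-836a-f93d00510948"  # example UUID from the log

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString(INSTANCE)

    # Walk the domain XML to find block device targets (vda, vdb, ...),
    # then ask libvirt for their sizes.
    xml = ElementTree.fromstring(dom.XMLDesc())
    for target in xml.findall("./devices/disk/target"):
        dev = target.get("dev")
        capacity, allocation, physical = dom.blockInfo(dev)
        print(f"{dev}: capacity={capacity} allocation={allocation} physical={physical}")

    conn.close()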
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.508 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.508 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.508 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/cpu volume: 33110000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.509 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/cpu volume: 4210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
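The cpu samples above are cumulative guest CPU time in nanoseconds, so a single reading is not a utilisation figure; a percentage only falls out of two consecutive polls. A back-of-the-envelope helper, using the logged value and a nominal polling interval as assumptions rather than anything ceilometer computes itself:

    def cpu_util_percent(cpu_ns_prev: int, cpu_ns_now: int,
                         interval_s: float, vcpus: int) -> float:
        """Convert two cumulative cpu-time readings (ns) into a % utilisation."""
        delta_ns = cpu_ns_now - cpu_ns_prev
        return 100.0 * delta_ns / (interval_s * 1e9 * vcpus)

    # E.g. if the 33110000000 ns reading above grew by 3e9 ns over a 300 s
    # interval on a 1-vCPU guest, that would be ~1% CPU.
    print(cpu_util_percent(33_110_000_000, 36_110_000_000, 300, 1))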
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.509 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T19:06:33.508630) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.510 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.510 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T19:06:33.510435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.511 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.511 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.511 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.512 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.512 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.513 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.514 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.514 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T19:06:33.514130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.515 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.516 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.516 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.516 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.516 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.516 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 713872381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.517 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 102610265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.517 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 79720785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.518 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 595709634 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T19:06:33.516579) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.518 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.519 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 3932907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.520 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.520 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.521 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.521 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.521 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T19:06:33.521199) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.521 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.522 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.522 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.523 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.523 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.523 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.523 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.523 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.524 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.525 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T19:06:33.523604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.525 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.526 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.526 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.526 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.527 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.527 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.528 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.529 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.529 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T19:06:33.525886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.530 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.530 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes volume: 2104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.530 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.531 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T19:06:33.530013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.532 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.532 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 2565149834 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.533 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 13830550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.533 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.534 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.535 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.535 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.536 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T19:06:33.532616) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.537 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.537 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.537 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T19:06:33.537589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.538 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.538 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.539 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.540 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.540 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.541 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T19:06:33.540476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.541 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.541 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.542 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.543 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.543 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T19:06:33.543319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.544 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.544 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.544 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.544 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T19:06:33.544507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.545 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d>]
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.546 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.546 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.546 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.546 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.546 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.546 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T19:06:33.546331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.547 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.547 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.547 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.548 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.548 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.549 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T19:06:33.548118) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.549 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.549 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.549 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T19:06:33.549437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.549 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.550 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.550 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.550 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.550 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.551 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.551 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.551 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.551 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.551 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.551 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes volume: 1920 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.552 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.553 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.553 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.553 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T19:06:33.551841) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.554 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d>]
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T19:06:33.553516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.554 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.554 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.555 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.555 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.555 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.555 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.555 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.555 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:06:33.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:06:35 compute-0 nova_compute[185480]: 2026-01-27 19:06:35.791 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:37 compute-0 nova_compute[185480]: 2026-01-27 19:06:37.481 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:37 compute-0 nova_compute[185480]: 2026-01-27 19:06:37.531 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:06:37 compute-0 nova_compute[185480]: 2026-01-27 19:06:37.532 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:06:37 compute-0 nova_compute[185480]: 2026-01-27 19:06:37.532 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:06:37 compute-0 nova_compute[185480]: 2026-01-27 19:06:37.757 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:06:37 compute-0 nova_compute[185480]: 2026-01-27 19:06:37.758 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:06:37 compute-0 nova_compute[185480]: 2026-01-27 19:06:37.759 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:06:37 compute-0 nova_compute[185480]: 2026-01-27 19:06:37.759 185484 DEBUG nova.objects.instance [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6b280bb-d859-43f3-836a-f93d00510948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:06:38 compute-0 podman[239441]: 2026-01-27 19:06:38.379254954 +0000 UTC m=+0.143762980 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7)
Jan 27 19:06:39 compute-0 nova_compute[185480]: 2026-01-27 19:06:39.638 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updating instance_info_cache with network_info: [{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:06:39 compute-0 nova_compute[185480]: 2026-01-27 19:06:39.652 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:06:39 compute-0 nova_compute[185480]: 2026-01-27 19:06:39.653 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:06:39 compute-0 nova_compute[185480]: 2026-01-27 19:06:39.653 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:06:39 compute-0 nova_compute[185480]: 2026-01-27 19:06:39.653 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.646 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.647 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.647 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.648 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.734 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.792 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.793 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.809 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.869 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.870 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.927 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.928 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:40 compute-0 nova_compute[185480]: 2026-01-27 19:06:40.989 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.002 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.075 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.077 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.178 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.180 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.252 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.253 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.315 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.659 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.660 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5110MB free_disk=72.42273712158203GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.660 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.661 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.756 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.756 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 41f46cfd-06bf-4ef6-85a3-cc6e8629637e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.757 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.757 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.828 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.848 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.865 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:06:41 compute-0 nova_compute[185480]: 2026-01-27 19:06:41.865 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:06:42 compute-0 nova_compute[185480]: 2026-01-27 19:06:42.485 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:42 compute-0 nova_compute[185480]: 2026-01-27 19:06:42.861 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:06:42 compute-0 nova_compute[185480]: 2026-01-27 19:06:42.862 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:06:43 compute-0 nova_compute[185480]: 2026-01-27 19:06:43.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:06:43 compute-0 nova_compute[185480]: 2026-01-27 19:06:43.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:06:44 compute-0 podman[239485]: 2026-01-27 19:06:44.826256638 +0000 UTC m=+0.125868769 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, managed_by=edpm_ansible)
Jan 27 19:06:45 compute-0 nova_compute[185480]: 2026-01-27 19:06:45.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:06:45 compute-0 nova_compute[185480]: 2026-01-27 19:06:45.812 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:47 compute-0 podman[239504]: 2026-01-27 19:06:47.332323907 +0000 UTC m=+0.096668650 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 19:06:47 compute-0 podman[239505]: 2026-01-27 19:06:47.371851571 +0000 UTC m=+0.131595980 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 19:06:47 compute-0 podman[239506]: 2026-01-27 19:06:47.37995453 +0000 UTC m=+0.122463315 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 19:06:47 compute-0 nova_compute[185480]: 2026-01-27 19:06:47.489 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:50 compute-0 nova_compute[185480]: 2026-01-27 19:06:50.816 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:52 compute-0 nova_compute[185480]: 2026-01-27 19:06:52.492 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:55 compute-0 nova_compute[185480]: 2026-01-27 19:06:55.820 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:57 compute-0 nova_compute[185480]: 2026-01-27 19:06:57.493 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:06:58 compute-0 ovn_controller[97647]: 2026-01-27T19:06:58Z|00039|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 27 19:06:59 compute-0 podman[201378]: time="2026-01-27T19:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:06:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:06:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4352 "" "Go-http-client/1.1"
Jan 27 19:07:00 compute-0 nova_compute[185480]: 2026-01-27 19:07:00.825 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:01 compute-0 openstack_network_exporter[204477]: ERROR   19:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:07:01 compute-0 openstack_network_exporter[204477]: ERROR   19:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:07:02 compute-0 podman[239573]: 2026-01-27 19:07:02.311563863 +0000 UTC m=+0.078200866 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 19:07:02 compute-0 podman[239572]: 2026-01-27 19:07:02.312776633 +0000 UTC m=+0.082688436 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:07:02 compute-0 nova_compute[185480]: 2026-01-27 19:07:02.495 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:03 compute-0 ovn_controller[97647]: 2026-01-27T19:07:03Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:f3:b6 192.168.0.54
Jan 27 19:07:03 compute-0 ovn_controller[97647]: 2026-01-27T19:07:03Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:f3:b6 192.168.0.54
Jan 27 19:07:04 compute-0 podman[239627]: 2026-01-27 19:07:04.340460119 +0000 UTC m=+0.115628787 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 19:07:05 compute-0 nova_compute[185480]: 2026-01-27 19:07:05.829 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:07 compute-0 nova_compute[185480]: 2026-01-27 19:07:07.499 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:09 compute-0 podman[239644]: 2026-01-27 19:07:09.336325431 +0000 UTC m=+0.089717180 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, distribution-scope=public)
Jan 27 19:07:10 compute-0 nova_compute[185480]: 2026-01-27 19:07:10.832 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:12 compute-0 nova_compute[185480]: 2026-01-27 19:07:12.501 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:15 compute-0 podman[239663]: 2026-01-27 19:07:15.344110156 +0000 UTC m=+0.112906250 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126)
Jan 27 19:07:15 compute-0 nova_compute[185480]: 2026-01-27 19:07:15.837 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:17 compute-0 nova_compute[185480]: 2026-01-27 19:07:17.503 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:18 compute-0 podman[239684]: 2026-01-27 19:07:18.293032135 +0000 UTC m=+0.066291372 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:07:18 compute-0 podman[239686]: 2026-01-27 19:07:18.322231934 +0000 UTC m=+0.082271456 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 19:07:18 compute-0 podman[239685]: 2026-01-27 19:07:18.356875787 +0000 UTC m=+0.118588810 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 19:07:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:07:20.510 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:07:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:07:20.511 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:07:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:07:20.511 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:07:20 compute-0 nova_compute[185480]: 2026-01-27 19:07:20.841 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:22 compute-0 nova_compute[185480]: 2026-01-27 19:07:22.505 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:25 compute-0 nova_compute[185480]: 2026-01-27 19:07:25.847 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:27 compute-0 nova_compute[185480]: 2026-01-27 19:07:27.508 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:29 compute-0 podman[201378]: time="2026-01-27T19:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:07:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:07:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4368 "" "Go-http-client/1.1"
Jan 27 19:07:30 compute-0 nova_compute[185480]: 2026-01-27 19:07:30.852 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:31 compute-0 openstack_network_exporter[204477]: ERROR   19:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:07:31 compute-0 openstack_network_exporter[204477]: ERROR   19:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:07:32 compute-0 nova_compute[185480]: 2026-01-27 19:07:32.510 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:33 compute-0 podman[239751]: 2026-01-27 19:07:33.28202688 +0000 UTC m=+0.063540205 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 19:07:33 compute-0 podman[239752]: 2026-01-27 19:07:33.333392074 +0000 UTC m=+0.110078170 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:07:35 compute-0 podman[239793]: 2026-01-27 19:07:35.289799532 +0000 UTC m=+0.070898149 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, release-0.7.12=, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., config_id=kepler, distribution-scope=public)
Jan 27 19:07:35 compute-0 sshd-session[239812]: Invalid user sol from 45.148.10.240 port 36778
Jan 27 19:07:35 compute-0 sshd-session[239812]: Connection closed by invalid user sol 45.148.10.240 port 36778 [preauth]
Jan 27 19:07:35 compute-0 nova_compute[185480]: 2026-01-27 19:07:35.854 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:37 compute-0 nova_compute[185480]: 2026-01-27 19:07:37.514 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:37 compute-0 nova_compute[185480]: 2026-01-27 19:07:37.517 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:07:37 compute-0 nova_compute[185480]: 2026-01-27 19:07:37.518 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:07:37 compute-0 nova_compute[185480]: 2026-01-27 19:07:37.846 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:07:37 compute-0 nova_compute[185480]: 2026-01-27 19:07:37.847 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:07:37 compute-0 nova_compute[185480]: 2026-01-27 19:07:37.847 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:07:39 compute-0 nova_compute[185480]: 2026-01-27 19:07:39.884 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updating instance_info_cache with network_info: [{"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:07:39 compute-0 nova_compute[185480]: 2026-01-27 19:07:39.955 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:07:39 compute-0 nova_compute[185480]: 2026-01-27 19:07:39.956 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:07:40 compute-0 podman[239815]: 2026-01-27 19:07:40.320069896 +0000 UTC m=+0.084660575 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 19:07:40 compute-0 nova_compute[185480]: 2026-01-27 19:07:40.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:07:40 compute-0 nova_compute[185480]: 2026-01-27 19:07:40.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:07:40 compute-0 nova_compute[185480]: 2026-01-27 19:07:40.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:07:40 compute-0 nova_compute[185480]: 2026-01-27 19:07:40.858 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:41 compute-0 nova_compute[185480]: 2026-01-27 19:07:41.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:07:41 compute-0 nova_compute[185480]: 2026-01-27 19:07:41.540 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:07:41 compute-0 nova_compute[185480]: 2026-01-27 19:07:41.541 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:07:41 compute-0 nova_compute[185480]: 2026-01-27 19:07:41.541 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:07:41 compute-0 nova_compute[185480]: 2026-01-27 19:07:41.542 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:07:41 compute-0 nova_compute[185480]: 2026-01-27 19:07:41.750 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:07:41 compute-0 nova_compute[185480]: 2026-01-27 19:07:41.809 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:07:41 compute-0 nova_compute[185480]: 2026-01-27 19:07:41.813 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:07:41 compute-0 nova_compute[185480]: 2026-01-27 19:07:41.874 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:07:41 compute-0 nova_compute[185480]: 2026-01-27 19:07:41.876 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:07:41 compute-0 nova_compute[185480]: 2026-01-27 19:07:41.941 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:07:41 compute-0 nova_compute[185480]: 2026-01-27 19:07:41.943 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.000 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.007 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.067 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.068 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.132 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.133 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.190 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.191 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.262 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.515 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.688 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.689 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5077MB free_disk=72.40121078491211GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.690 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.690 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.815 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.816 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 41f46cfd-06bf-4ef6-85a3-cc6e8629637e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.816 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.816 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.876 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.906 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
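The inventory reported above is what the placement service uses to size this provider: for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. A quick sketch with the exact values from the log line above:

    # Placement-style capacity math using the inventory values logged above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {capacity:g}")
    # Expected output: MEMORY_MB 7167, VCPU 32, DISK_GB 70.2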
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.908 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:07:42 compute-0 nova_compute[185480]: 2026-01-27 19:07:42.908 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
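The acquire/release pair around the resource update above ("compute_resources", held 0.218s) comes from oslo.concurrency's lockutils: the resource tracker serializes _update_available_resource behind that named in-process lock. A minimal sketch of the same pattern, assuming only oslo.concurrency is installed; the decorated function is a stand-in, not nova's implementation:

    # Serialize a critical section behind a named lock, as the resource
    # tracker does with "compute_resources" in the log above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # stand-in for ResourceTracker._update_available_resource()
        print("resource view refreshed while holding the lock")

    update_available_resource()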
Jan 27 19:07:43 compute-0 nova_compute[185480]: 2026-01-27 19:07:43.903 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:07:43 compute-0 nova_compute[185480]: 2026-01-27 19:07:43.903 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:07:44 compute-0 nova_compute[185480]: 2026-01-27 19:07:44.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:07:44 compute-0 nova_compute[185480]: 2026-01-27 19:07:44.531 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:07:45 compute-0 nova_compute[185480]: 2026-01-27 19:07:45.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:07:45 compute-0 nova_compute[185480]: 2026-01-27 19:07:45.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:07:45 compute-0 nova_compute[185480]: 2026-01-27 19:07:45.860 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:46 compute-0 podman[239858]: 2026-01-27 19:07:46.30742597 +0000 UTC m=+0.086145912 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
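The podman health_status records above are emitted by the periodic healthcheck declared in each container's config_data ('test': '/openstack/healthcheck compute', mounted from /var/lib/openstack/healthchecks/...). To check the same state interactively, one option is to trigger the healthcheck and read the cached status back; a hedged sketch (container name taken from the log; assumes podman is on PATH, the caller can see the container, and the .State.Health.Status template path matches this podman version):

    # Query the health state podman records for the container logged above.
    import subprocess

    name = "ceilometer_agent_compute"
    # Run the configured healthcheck once (same script the periodic timer runs).
    subprocess.run(["podman", "healthcheck", "run", name], check=False)
    # Read the cached health status from the container state (template path
    # assumed; it may differ between podman versions).
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True,
    ).stdout.strip()
    print(name, "->", status or "no healthcheck state recorded")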
Jan 27 19:07:47 compute-0 nova_compute[185480]: 2026-01-27 19:07:47.517 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:49 compute-0 podman[239878]: 2026-01-27 19:07:49.351559718 +0000 UTC m=+0.119492745 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:07:49 compute-0 podman[239880]: 2026-01-27 19:07:49.356516699 +0000 UTC m=+0.099370574 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 19:07:49 compute-0 podman[239879]: 2026-01-27 19:07:49.393255415 +0000 UTC m=+0.145864958 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:07:50 compute-0 nova_compute[185480]: 2026-01-27 19:07:50.863 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:52 compute-0 nova_compute[185480]: 2026-01-27 19:07:52.520 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:55 compute-0 nova_compute[185480]: 2026-01-27 19:07:55.865 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:57 compute-0 nova_compute[185480]: 2026-01-27 19:07:57.524 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:07:59 compute-0 podman[201378]: time="2026-01-27T19:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:07:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:07:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4368 "" "Go-http-client/1.1"
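The two GET requests above are the prometheus-podman-exporter polling podman's libpod REST API over the service socket (the podman_exporter config earlier sets CONTAINER_HOST=unix:///run/podman/podman.sock). A minimal standard-library sketch of the same container-listing call (socket path and API path are taken from the log; the caller needs permission to read the socket):

    # List containers via podman's libpod REST API over its unix socket,
    # mirroring the GET /v4.9.3/libpod/containers/json call logged above.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")  # host header only; traffic goes over the socket
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c.get("Names") for c in containers])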
Jan 27 19:08:00 compute-0 nova_compute[185480]: 2026-01-27 19:08:00.869 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:01 compute-0 openstack_network_exporter[204477]: ERROR   19:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:08:01 compute-0 openstack_network_exporter[204477]: ERROR   19:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
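The two ERROR lines above recur on every scrape: the exporter calls ovs-appctl's dpif-netdev/pmd-perf-show and pmd-rxq-show, which only return data when a userspace (netdev/DPDK) datapath with PMD threads exists; with the kernel datapath, ovs-vswitchd answers "please specify an existing datapath". A hedged sketch that checks which datapaths exist before asking for PMD statistics (assumes ovs-appctl is installed and the caller can reach the ovs-vswitchd control socket, typically as root):

    # Check the available OVS datapaths; the PMD commands in the errors above
    # only apply to a netdev (userspace/DPDK) datapath.
    import subprocess

    dpifs = subprocess.run(["ovs-appctl", "dpif/show"],
                           capture_output=True, text=True).stdout
    print(dpifs or "<no datapaths reported>")
    if "netdev" in dpifs:
        subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"])
    else:
        print("no netdev datapath; PMD statistics are not available on this host")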
Jan 27 19:08:02 compute-0 nova_compute[185480]: 2026-01-27 19:08:02.526 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:04 compute-0 podman[239944]: 2026-01-27 19:08:04.312407535 +0000 UTC m=+0.079603203 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:08:04 compute-0 podman[239945]: 2026-01-27 19:08:04.337794944 +0000 UTC m=+0.104554702 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 19:08:05 compute-0 nova_compute[185480]: 2026-01-27 19:08:05.872 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:06 compute-0 podman[239988]: 2026-01-27 19:08:06.342831997 +0000 UTC m=+0.119494306 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, distribution-scope=public, name=ubi9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9)
Jan 27 19:08:07 compute-0 nova_compute[185480]: 2026-01-27 19:08:07.529 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:10 compute-0 nova_compute[185480]: 2026-01-27 19:08:10.875 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:11 compute-0 podman[240007]: 2026-01-27 19:08:11.315103294 +0000 UTC m=+0.080392342 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, maintainer=Red Hat, Inc.)
Jan 27 19:08:12 compute-0 nova_compute[185480]: 2026-01-27 19:08:12.533 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:15 compute-0 nova_compute[185480]: 2026-01-27 19:08:15.879 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:17 compute-0 podman[240028]: 2026-01-27 19:08:17.308018435 +0000 UTC m=+0.077379398 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 19:08:17 compute-0 nova_compute[185480]: 2026-01-27 19:08:17.537 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:20 compute-0 podman[240049]: 2026-01-27 19:08:20.287905398 +0000 UTC m=+0.064803733 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 19:08:20 compute-0 podman[240051]: 2026-01-27 19:08:20.316063364 +0000 UTC m=+0.082050802 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 19:08:20 compute-0 podman[240050]: 2026-01-27 19:08:20.3708409 +0000 UTC m=+0.140781565 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 19:08:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:08:20.512 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:08:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:08:20.513 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:08:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:08:20.514 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:08:20 compute-0 nova_compute[185480]: 2026-01-27 19:08:20.881 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:22 compute-0 nova_compute[185480]: 2026-01-27 19:08:22.538 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:25 compute-0 nova_compute[185480]: 2026-01-27 19:08:25.884 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:27 compute-0 nova_compute[185480]: 2026-01-27 19:08:27.542 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:29 compute-0 podman[201378]: time="2026-01-27T19:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:08:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:08:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4363 "" "Go-http-client/1.1"
Jan 27 19:08:30 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 19:08:30 compute-0 nova_compute[185480]: 2026-01-27 19:08:30.888 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:31 compute-0 openstack_network_exporter[204477]: ERROR   19:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:08:31 compute-0 openstack_network_exporter[204477]: ERROR   19:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.095 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.095 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.103 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b6b280bb-d859-43f3-836a-f93d00510948', 'name': 'test_0', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.107 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41f46cfd-06bf-4ef6-85a3-cc6e8629637e', 'name': 'vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
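The instance dictionaries above are built by ceilometer's libvirt discovery, which walks the running domains and merges in the nova metadata embedded in each domain's XML. A minimal sketch listing the same domains with the libvirt Python bindings (assumes python3-libvirt is installed and the caller can open qemu:///system read-only; the nova metadata namespace is the one referenced elsewhere in this log):

    # Enumerate the libvirt domains that ceilometer's discovery iterates over.
    import libvirt  # python3-libvirt bindings

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        # Nova stores flavor/owner details under the metadata namespace
        # http://openstack.org/xmlns/libvirt/nova/1.1, which discovery parses.
        print(dom.UUIDString(), dom.name(),
              "running" if dom.isActive() else "shut off")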
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.108 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.108 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.109 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T19:08:32.108798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.139 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 21766144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.140 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.140 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.166 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.167 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.167 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
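For the per-device disk meters the agent logs one "volume:" line per block device attached to the instance; the three readings above line up with the 1 GB root disk, the 1 GB ephemeral disk and a third, much smaller device (most likely the config drive). A small sketch that groups such readings by instance and meter, assuming the line layout shown:

    import re
    from collections import defaultdict

    SAMPLE_RE = re.compile(
        r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<value>[\d.]+)"
    )

    def collect_samples(lines):
        """Map (instance UUID, meter name) -> list of per-device readings."""
        samples = defaultdict(list)
        for line in lines:
            match = SAMPLE_RE.search(line)
            if match:
                samples[(match["instance"], match["meter"])].append(float(match["value"]))
        return samples

    # For the block above: ('b6b280bb-...', 'disk.device.allocation')
    # -> [21766144.0, 1253376.0, 487424.0]; summing gives the instance total.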
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.168 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.168 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.168 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.168 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T19:08:32.168408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.191 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.216 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/memory.usage volume: 49.0078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.217 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
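memory.usage is reported in MB, so the two readings above can be set directly against the 512 MB of the m1.small flavor from the discovery records:

    FLAVOR_RAM_MB = 512  # m1.small, from the discovery lines above

    for uuid_prefix, used_mb in [("b6b280bb", 48.9140625), ("41f46cfd", 49.0078125)]:
        print(f"{uuid_prefix}: {used_mb:.1f} MB / {FLAVOR_RAM_MB} MB "
              f"= {used_mb / FLAVOR_RAM_MB:.1%} of guest RAM")
    # Both instances sit at roughly 9.6% memory utilisation.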
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.217 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.217 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.218 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.218 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.218 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T19:08:32.218203) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.218 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.219 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.219 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
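power.state carries the hypervisor's numeric domain-state code rather than a string; the value 1 reported for both instances denotes a running domain, consistent with the 'running' vm_state in the discovery records. A small lookup table, assuming the libvirt virDomainState numbering (Nova's own power-state codes likewise use 1 for running):

    # libvirt virDomainState values (assumed numbering for this meter).
    DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }

    assert DOMAIN_STATE[1] == "running"   # matches the samples above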
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.220 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.220 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.220 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.220 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.220 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T19:08:32.220353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.224 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.228 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.bytes.delta volume: 4759 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.229 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.229 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.229 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.229 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.230 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T19:08:32.229822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.293 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.293 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.294 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.362 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.362 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.363 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.364 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.364 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.364 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.364 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.365 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.365 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T19:08:32.364330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.366 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.366 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.drop volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T19:08:32.366315) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.366 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.367 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.367 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.367 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.367 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.368 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T19:08:32.367985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.368 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.368 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.368 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.369 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.369 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.369 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.370 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
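disk.device.capacity is reported in bytes; the two 1073741824-byte devices per instance are exactly 1 GiB each, matching the flavor's disk=1 and ephemeral=1 (GB), while the remaining small device (485376 / 583680 bytes) is consistent with a config drive:

    GiB = 1024 ** 3

    for capacity in (1073741824, 1073741824, 485376):   # test_0's devices above
        print(f"{capacity} bytes = {capacity / GiB:.6f} GiB")
    # -> 1.000000, 1.000000 and ~0.000452 GiB respectively.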
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.370 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.370 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.370 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.370 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.371 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/cpu volume: 34560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.371 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/cpu volume: 75390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.371 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.371 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T19:08:32.370824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
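The cpu meter is cumulative guest CPU time in nanoseconds (34560000000 ns is roughly 34.6 s for the first instance so far), so a utilisation figure has to be derived from two consecutive polls. A sketch of that derivation, assuming a 300 s polling interval and the single vCPU of m1.small (the second reading in the call is a hypothetical later sample):

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
        """Average CPU utilisation between two cumulative cpu samples."""
        used_s = (curr_ns - prev_ns) / 1e9
        return 100.0 * used_s / (interval_s * vcpus)

    # 3 s of CPU time accrued over a 300 s window on one vCPU -> 1.0 (percent)
    print(cpu_util_percent(34_560_000_000, 34_563_000_000, interval_s=300))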
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.372 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.372 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.372 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.372 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.372 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.372 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.373 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T19:08:32.372499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.373 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.373 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.373 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.374 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.375 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.375 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.375 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.375 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.375 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T19:08:32.375460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.376 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.376 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.376 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.377 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.377 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.377 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 713872381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.377 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 102610265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.377 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 79720785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.378 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 746276888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.378 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 98242096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.378 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 91644949 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T19:08:32.377109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.379 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.379 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.379 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.379 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.379 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.380 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.380 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.380 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.380 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T19:08:32.379962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.381 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.381 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.381 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T19:08:32.381633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.381 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.382 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets volume: 38 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.382 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.383 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.383 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.383 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.383 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.384 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T19:08:32.383137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.384 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.384 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.385 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.385 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.385 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.385 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.385 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.386 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes volume: 2244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T19:08:32.385869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.386 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.bytes volume: 4558 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.387 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.387 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.387 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T19:08:32.387377) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.387 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 2565149834 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.387 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 13830550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.388 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.388 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 1314777326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.388 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 12613240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.388 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.389 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.389 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.389 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.389 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.390 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.390 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.390 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T19:08:32.390064) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.390 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.bytes.delta volume: 4558 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.391 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
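The *.delta meters report the change in the matching cumulative counter since the previous poll: for the first instance network.outgoing.bytes.delta is 140 against a cumulative network.outgoing.bytes of 2244, so the previous poll would have read 2104 (assuming no counter reset in between). A minimal sketch of that bookkeeping, with the 2104 value used purely as a hypothetical earlier reading:

    _previous = {}   # (instance, meter) -> last cumulative reading

    def to_delta(instance, meter, cumulative):
        """Change since the previous poll; first sample or reset returns the value itself."""
        key = (instance, meter)
        prev = _previous.get(key)
        _previous[key] = cumulative
        return cumulative if prev is None or cumulative < prev else cumulative - prev

    # to_delta("b6b280bb", "network.outgoing.bytes", 2104) followed by
    # to_delta("b6b280bb", "network.outgoing.bytes", 2244) -> 140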
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.391 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.391 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.391 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T19:08:32.391582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.392 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.392 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.392 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.392 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.393 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.394 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.394 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.395 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.395 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.395 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T19:08:32.394458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.395 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.395 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.395 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.396 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.396 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.396 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.396 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T19:08:32.396227) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.396 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.397 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.397 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.397 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.398 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T19:08:32.398032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.398 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.398 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.398 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.398 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.399 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.399 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.399 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.399 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.399 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.400 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.400 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.400 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.400 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.400 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.401 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.401 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes volume: 1920 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.401 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.401 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.402 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.402 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.402 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.402 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.402 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.403 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T19:08:32.399118) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.403 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T19:08:32.401011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.403 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.403 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.403 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.403 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.403 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.404 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.404 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.404 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.404 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.404 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.404 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.404 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.404 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:08:32.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:08:32 compute-0 nova_compute[185480]: 2026-01-27 19:08:32.546 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:35 compute-0 podman[240122]: 2026-01-27 19:08:35.297252006 +0000 UTC m=+0.073618416 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 19:08:35 compute-0 podman[240123]: 2026-01-27 19:08:35.315327117 +0000 UTC m=+0.090514528 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 19:08:35 compute-0 nova_compute[185480]: 2026-01-27 19:08:35.890 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:37 compute-0 podman[240165]: 2026-01-27 19:08:37.313940835 +0000 UTC m=+0.096500235 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 27 19:08:37 compute-0 nova_compute[185480]: 2026-01-27 19:08:37.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:08:37 compute-0 nova_compute[185480]: 2026-01-27 19:08:37.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:08:37 compute-0 nova_compute[185480]: 2026-01-27 19:08:37.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:08:37 compute-0 nova_compute[185480]: 2026-01-27 19:08:37.545 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:38 compute-0 nova_compute[185480]: 2026-01-27 19:08:38.055 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:08:38 compute-0 nova_compute[185480]: 2026-01-27 19:08:38.056 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:08:38 compute-0 nova_compute[185480]: 2026-01-27 19:08:38.057 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:08:38 compute-0 nova_compute[185480]: 2026-01-27 19:08:38.057 185484 DEBUG nova.objects.instance [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6b280bb-d859-43f3-836a-f93d00510948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:08:39 compute-0 nova_compute[185480]: 2026-01-27 19:08:39.589 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updating instance_info_cache with network_info: [{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:08:39 compute-0 nova_compute[185480]: 2026-01-27 19:08:39.605 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:08:39 compute-0 nova_compute[185480]: 2026-01-27 19:08:39.605 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:08:40 compute-0 nova_compute[185480]: 2026-01-27 19:08:40.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:08:40 compute-0 nova_compute[185480]: 2026-01-27 19:08:40.895 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.577 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.577 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.578 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.578 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.723 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.782 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.785 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.844 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.845 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.907 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.909 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.972 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:08:41 compute-0 nova_compute[185480]: 2026-01-27 19:08:41.979 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.048 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.050 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.107 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.109 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.201 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.202 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.271 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:08:42 compute-0 podman[240205]: 2026-01-27 19:08:42.300303176 +0000 UTC m=+0.076512228 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.547 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.583 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.584 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5064MB free_disk=72.39923858642578GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.585 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.585 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.686 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.687 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 41f46cfd-06bf-4ef6-85a3-cc6e8629637e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.687 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.687 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.750 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.786 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.788 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:08:42 compute-0 nova_compute[185480]: 2026-01-27 19:08:42.788 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:08:44 compute-0 nova_compute[185480]: 2026-01-27 19:08:44.784 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:08:44 compute-0 nova_compute[185480]: 2026-01-27 19:08:44.784 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:08:45 compute-0 nova_compute[185480]: 2026-01-27 19:08:45.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:08:45 compute-0 nova_compute[185480]: 2026-01-27 19:08:45.898 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:46 compute-0 nova_compute[185480]: 2026-01-27 19:08:46.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:08:46 compute-0 nova_compute[185480]: 2026-01-27 19:08:46.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:08:47 compute-0 nova_compute[185480]: 2026-01-27 19:08:47.552 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:48 compute-0 podman[240228]: 2026-01-27 19:08:48.298216577 +0000 UTC m=+0.074396445 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 19:08:50 compute-0 nova_compute[185480]: 2026-01-27 19:08:50.903 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:51 compute-0 podman[240249]: 2026-01-27 19:08:51.324379577 +0000 UTC m=+0.098204666 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 19:08:51 compute-0 podman[240256]: 2026-01-27 19:08:51.359184986 +0000 UTC m=+0.110685050 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 19:08:51 compute-0 podman[240250]: 2026-01-27 19:08:51.369072687 +0000 UTC m=+0.130765230 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 19:08:52 compute-0 nova_compute[185480]: 2026-01-27 19:08:52.552 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:55 compute-0 nova_compute[185480]: 2026-01-27 19:08:55.907 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:57 compute-0 nova_compute[185480]: 2026-01-27 19:08:57.554 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:08:59 compute-0 podman[201378]: time="2026-01-27T19:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:08:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:08:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4365 "" "Go-http-client/1.1"
Jan 27 19:09:00 compute-0 nova_compute[185480]: 2026-01-27 19:09:00.911 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:01 compute-0 openstack_network_exporter[204477]: ERROR   19:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:09:01 compute-0 openstack_network_exporter[204477]: ERROR   19:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:09:02 compute-0 nova_compute[185480]: 2026-01-27 19:09:02.556 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:05 compute-0 nova_compute[185480]: 2026-01-27 19:09:05.913 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:06 compute-0 podman[240316]: 2026-01-27 19:09:06.294973221 +0000 UTC m=+0.068925762 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 19:09:06 compute-0 podman[240317]: 2026-01-27 19:09:06.310800048 +0000 UTC m=+0.075577494 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 27 19:09:07 compute-0 nova_compute[185480]: 2026-01-27 19:09:07.559 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:08 compute-0 podman[240356]: 2026-01-27 19:09:08.314253774 +0000 UTC m=+0.097978151 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, version=9.4, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 19:09:10 compute-0 nova_compute[185480]: 2026-01-27 19:09:10.916 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:12 compute-0 nova_compute[185480]: 2026-01-27 19:09:12.561 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:13 compute-0 podman[240374]: 2026-01-27 19:09:13.293140293 +0000 UTC m=+0.070672926 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, release=1755695350, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible)
Jan 27 19:09:15 compute-0 nova_compute[185480]: 2026-01-27 19:09:15.919 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:17 compute-0 nova_compute[185480]: 2026-01-27 19:09:17.563 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:19 compute-0 podman[240394]: 2026-01-27 19:09:19.33019969 +0000 UTC m=+0.099431836 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.build-date=20260126)
Jan 27 19:09:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:20.514 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:20.515 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:20.515 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:20 compute-0 nova_compute[185480]: 2026-01-27 19:09:20.922 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:22 compute-0 podman[240415]: 2026-01-27 19:09:22.280211323 +0000 UTC m=+0.059428720 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:09:22 compute-0 podman[240417]: 2026-01-27 19:09:22.338907335 +0000 UTC m=+0.093262685 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 19:09:22 compute-0 podman[240416]: 2026-01-27 19:09:22.371934851 +0000 UTC m=+0.130827382 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 19:09:22 compute-0 nova_compute[185480]: 2026-01-27 19:09:22.565 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:25 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:25.565 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:09:25 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:25.566 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:09:25 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:25.568 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:09:25 compute-0 nova_compute[185480]: 2026-01-27 19:09:25.570 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:25 compute-0 nova_compute[185480]: 2026-01-27 19:09:25.925 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:27 compute-0 nova_compute[185480]: 2026-01-27 19:09:27.572 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:29 compute-0 podman[201378]: time="2026-01-27T19:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:09:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:09:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4377 "" "Go-http-client/1.1"
Jan 27 19:09:30 compute-0 nova_compute[185480]: 2026-01-27 19:09:30.928 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:31 compute-0 openstack_network_exporter[204477]: ERROR   19:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:09:31 compute-0 openstack_network_exporter[204477]: ERROR   19:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:09:32 compute-0 nova_compute[185480]: 2026-01-27 19:09:32.572 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:32 compute-0 nova_compute[185480]: 2026-01-27 19:09:32.740 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:32 compute-0 nova_compute[185480]: 2026-01-27 19:09:32.741 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:32 compute-0 nova_compute[185480]: 2026-01-27 19:09:32.765 185484 DEBUG nova.compute.manager [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 19:09:32 compute-0 nova_compute[185480]: 2026-01-27 19:09:32.868 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:32 compute-0 nova_compute[185480]: 2026-01-27 19:09:32.869 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:32 compute-0 nova_compute[185480]: 2026-01-27 19:09:32.879 185484 DEBUG nova.virt.hardware [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 19:09:32 compute-0 nova_compute[185480]: 2026-01-27 19:09:32.880 185484 INFO nova.compute.claims [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Claim successful on node compute-0.ctlplane.example.com
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.030 185484 DEBUG nova.compute.provider_tree [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.051 185484 DEBUG nova.scheduler.client.report [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.074 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.075 185484 DEBUG nova.compute.manager [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.130 185484 DEBUG nova.compute.manager [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.131 185484 DEBUG nova.network.neutron [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.152 185484 INFO nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.189 185484 DEBUG nova.compute.manager [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.274 185484 DEBUG nova.compute.manager [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.276 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.277 185484 INFO nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Creating image(s)
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.278 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "/var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.279 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.280 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.294 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.385 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.386 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.387 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.398 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.454 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.456 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97,backing_fmt=raw /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.513 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97,backing_fmt=raw /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk 1073741824" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.514 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.515 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.586 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.587 185484 DEBUG nova.virt.disk.api [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Checking if we can resize image /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.587 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.649 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.651 185484 DEBUG nova.virt.disk.api [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Cannot resize image /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.651 185484 DEBUG nova.objects.instance [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 04920b61-96ec-47fc-9d6d-dfdb491e0e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.667 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "/var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.667 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.668 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.680 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.735 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.736 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.736 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.747 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.801 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.803 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.918 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 1073741824" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.919 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.920 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.976 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.978 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.979 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Ensure instance console log exists: /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.980 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.981 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:33 compute-0 nova_compute[185480]: 2026-01-27 19:09:33.981 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:34 compute-0 nova_compute[185480]: 2026-01-27 19:09:34.296 185484 DEBUG nova.network.neutron [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Successfully updated port: a9d60848-08b8-4d9d-9aad-b656c10474d8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 19:09:34 compute-0 nova_compute[185480]: 2026-01-27 19:09:34.328 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:09:34 compute-0 nova_compute[185480]: 2026-01-27 19:09:34.329 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquired lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:09:34 compute-0 nova_compute[185480]: 2026-01-27 19:09:34.329 185484 DEBUG nova.network.neutron [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:09:34 compute-0 nova_compute[185480]: 2026-01-27 19:09:34.508 185484 DEBUG nova.network.neutron [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:09:34 compute-0 nova_compute[185480]: 2026-01-27 19:09:34.731 185484 DEBUG nova.compute.manager [req-d56640b7-2e15-419c-9a1d-a5e2692cd52e req-f382947c-2636-4bc9-9a06-50df0e27136a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Received event network-changed-a9d60848-08b8-4d9d-9aad-b656c10474d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:09:34 compute-0 nova_compute[185480]: 2026-01-27 19:09:34.731 185484 DEBUG nova.compute.manager [req-d56640b7-2e15-419c-9a1d-a5e2692cd52e req-f382947c-2636-4bc9-9a06-50df0e27136a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Refreshing instance network info cache due to event network-changed-a9d60848-08b8-4d9d-9aad-b656c10474d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:09:34 compute-0 nova_compute[185480]: 2026-01-27 19:09:34.732 185484 DEBUG oslo_concurrency.lockutils [req-d56640b7-2e15-419c-9a1d-a5e2692cd52e req-f382947c-2636-4bc9-9a06-50df0e27136a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.154 185484 DEBUG nova.network.neutron [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Updating instance_info_cache with network_info: [{"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.174 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Releasing lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.175 185484 DEBUG nova.compute.manager [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Instance network_info: |[{"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.175 185484 DEBUG oslo_concurrency.lockutils [req-d56640b7-2e15-419c-9a1d-a5e2692cd52e req-f382947c-2636-4bc9-9a06-50df0e27136a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.176 185484 DEBUG nova.network.neutron [req-d56640b7-2e15-419c-9a1d-a5e2692cd52e req-f382947c-2636-4bc9-9a06-50df0e27136a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Refreshing network info cache for port a9d60848-08b8-4d9d-9aad-b656c10474d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.179 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Start _get_guest_xml network_info=[{"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T19:03:35Z,direct_url=<?>,disk_format='qcow2',id=525193b7-cb5a-4d63-9747-3b917622bbe3,min_disk=0,min_ram=0,name='cirros',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T19:03:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.186 185484 WARNING nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.199 185484 DEBUG nova.virt.libvirt.host [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.199 185484 DEBUG nova.virt.libvirt.host [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.204 185484 DEBUG nova.virt.libvirt.host [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.205 185484 DEBUG nova.virt.libvirt.host [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.205 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.206 185484 DEBUG nova.virt.hardware [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T19:03:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bc7c8c58-0a2b-4396-9f89-7ff8e35afa36',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T19:03:35Z,direct_url=<?>,disk_format='qcow2',id=525193b7-cb5a-4d63-9747-3b917622bbe3,min_disk=0,min_ram=0,name='cirros',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T19:03:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.206 185484 DEBUG nova.virt.hardware [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.206 185484 DEBUG nova.virt.hardware [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.207 185484 DEBUG nova.virt.hardware [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.207 185484 DEBUG nova.virt.hardware [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.207 185484 DEBUG nova.virt.hardware [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.208 185484 DEBUG nova.virt.hardware [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.208 185484 DEBUG nova.virt.hardware [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.208 185484 DEBUG nova.virt.hardware [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.209 185484 DEBUG nova.virt.hardware [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.209 185484 DEBUG nova.virt.hardware [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.213 185484 DEBUG nova.virt.libvirt.vif [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6',id=3,image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f04ec1493db14ca1adbb4b6abd1667b1',ramdisk_id='',reservation_id='r-jdpfr1nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:09:33Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNDM5NjA0NDkwNTM3OTA2MDg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA0Mzk2MDQ0OTA1Mzc5MDYwODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDQzOTYwNDQ5MDUzNzkwNjA4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA0Mzk2MDQ0OTA1Mzc5MDYwODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNDM5NjA0NDkwNTM3OTA2MDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNDM5NjA0NDkwNTM3OTA2MDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 27 19:09:35 compute-0 nova_compute[185480]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDQzOTYwNDQ5MDUzNzkwNjA4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA0Mzk2MDQ0OTA1Mzc5MDYwODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNDM5NjA0NDkwNTM3OTA2MDg4PT0tLQo=',user_id='6d30d46dc88a4403b3a241949384d8f7',uuid=04920b61-96ec-47fc-9d6d-dfdb491e0e77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.213 185484 DEBUG nova.network.os_vif_util [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converting VIF {"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.214 185484 DEBUG nova.network.os_vif_util [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:40:8d,bridge_name='br-int',has_traffic_filtering=True,id=a9d60848-08b8-4d9d-9aad-b656c10474d8,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9d60848-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.215 185484 DEBUG nova.objects.instance [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 04920b61-96ec-47fc-9d6d-dfdb491e0e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.234 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] End _get_guest_xml xml=<domain type="kvm">
Jan 27 19:09:35 compute-0 nova_compute[185480]:   <uuid>04920b61-96ec-47fc-9d6d-dfdb491e0e77</uuid>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   <name>instance-00000003</name>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   <memory>524288</memory>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   <vcpu>1</vcpu>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   <metadata>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <nova:name>vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6</nova:name>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <nova:creationTime>2026-01-27 19:09:35</nova:creationTime>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <nova:flavor name="m1.small">
Jan 27 19:09:35 compute-0 nova_compute[185480]:         <nova:memory>512</nova:memory>
Jan 27 19:09:35 compute-0 nova_compute[185480]:         <nova:disk>1</nova:disk>
Jan 27 19:09:35 compute-0 nova_compute[185480]:         <nova:swap>0</nova:swap>
Jan 27 19:09:35 compute-0 nova_compute[185480]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 19:09:35 compute-0 nova_compute[185480]:         <nova:vcpus>1</nova:vcpus>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       </nova:flavor>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <nova:owner>
Jan 27 19:09:35 compute-0 nova_compute[185480]:         <nova:user uuid="6d30d46dc88a4403b3a241949384d8f7">admin</nova:user>
Jan 27 19:09:35 compute-0 nova_compute[185480]:         <nova:project uuid="f04ec1493db14ca1adbb4b6abd1667b1">admin</nova:project>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       </nova:owner>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <nova:root type="image" uuid="525193b7-cb5a-4d63-9747-3b917622bbe3"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <nova:ports>
Jan 27 19:09:35 compute-0 nova_compute[185480]:         <nova:port uuid="a9d60848-08b8-4d9d-9aad-b656c10474d8">
Jan 27 19:09:35 compute-0 nova_compute[185480]:           <nova:ip type="fixed" address="192.168.0.63" ipVersion="4"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:         </nova:port>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       </nova:ports>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     </nova:instance>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   </metadata>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   <sysinfo type="smbios">
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <system>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <entry name="manufacturer">RDO</entry>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <entry name="product">OpenStack Compute</entry>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <entry name="serial">04920b61-96ec-47fc-9d6d-dfdb491e0e77</entry>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <entry name="uuid">04920b61-96ec-47fc-9d6d-dfdb491e0e77</entry>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <entry name="family">Virtual Machine</entry>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     </system>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   </sysinfo>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   <os>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <boot dev="hd"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <smbios mode="sysinfo"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   </os>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   <features>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <acpi/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <apic/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <vmcoreinfo/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   </features>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   <clock offset="utc">
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <timer name="hpet" present="no"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   </clock>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   <cpu mode="host-model" match="exact">
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   </cpu>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   <devices>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <target dev="vda" bus="virtio"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <target dev="vdb" bus="virtio"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <disk type="file" device="cdrom">
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.config"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <target dev="sda" bus="sata"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <interface type="ethernet">
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <mac address="fa:16:3e:42:40:8d"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <mtu size="1442"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <target dev="tapa9d60848-08"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     </interface>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <serial type="pty">
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <log file="/var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/console.log" append="off"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     </serial>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <video>
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     </video>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <input type="tablet" bus="usb"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <rng model="virtio">
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <backend model="random">/dev/urandom</backend>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     </rng>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <controller type="usb" index="0"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     <memballoon model="virtio">
Jan 27 19:09:35 compute-0 nova_compute[185480]:       <stats period="10"/>
Jan 27 19:09:35 compute-0 nova_compute[185480]:     </memballoon>
Jan 27 19:09:35 compute-0 nova_compute[185480]:   </devices>
Jan 27 19:09:35 compute-0 nova_compute[185480]: </domain>
Jan 27 19:09:35 compute-0 nova_compute[185480]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.236 185484 DEBUG nova.compute.manager [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Preparing to wait for external event network-vif-plugged-a9d60848-08b8-4d9d-9aad-b656c10474d8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.236 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.237 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.237 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.238 185484 DEBUG nova.virt.libvirt.vif [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6',id=3,image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f04ec1493db14ca1adbb4b6abd1667b1',ramdisk_id='',reservation_id='r-jdpfr1nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:09:33Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNDM5NjA0NDkwNTM3OTA2MDg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA0Mzk2MDQ0OTA1Mzc5MDYwODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDQzOTYwNDQ5MDUzNzkwNjA4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA0Mzk2MDQ0OTA1Mzc5MDYwODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNDM5NjA0NDkwNTM3OTA2MDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNDM5NjA0NDkwNTM3OTA2MDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 27 19:09:35 compute-0 nova_compute[185480]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDQzOTYwNDQ5MDUzNzkwNjA4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA0Mzk2MDQ0OTA1Mzc5MDYwODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNDM5NjA0NDkwNTM3OTA2MDg4PT0tLQo=',user_id='6d30d46dc88a4403b3a241949384d8f7',uuid=04920b61-96ec-47fc-9d6d-dfdb491e0e77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.239 185484 DEBUG nova.network.os_vif_util [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converting VIF {"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.239 185484 DEBUG nova.network.os_vif_util [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:40:8d,bridge_name='br-int',has_traffic_filtering=True,id=a9d60848-08b8-4d9d-9aad-b656c10474d8,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9d60848-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.240 185484 DEBUG os_vif [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:40:8d,bridge_name='br-int',has_traffic_filtering=True,id=a9d60848-08b8-4d9d-9aad-b656c10474d8,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9d60848-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.241 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.241 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.242 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.246 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.246 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa9d60848-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.246 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa9d60848-08, col_values=(('external_ids', {'iface-id': 'a9d60848-08b8-4d9d-9aad-b656c10474d8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:40:8d', 'vm-uuid': '04920b61-96ec-47fc-9d6d-dfdb491e0e77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.248 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:35 compute-0 NetworkManager[56191]: <info>  [1769540975.2493] manager: (tapa9d60848-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.250 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.258 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.260 185484 INFO os_vif [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:40:8d,bridge_name='br-int',has_traffic_filtering=True,id=a9d60848-08b8-4d9d-9aad-b656c10474d8,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9d60848-08')
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.318 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.318 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.319 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.319 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No VIF found with MAC fa:16:3e:42:40:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.320 185484 INFO nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Using config drive
Jan 27 19:09:35 compute-0 rsyslogd[235877]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 19:09:35.213 185484 DEBUG nova.virt.libvirt.vif [None req-b9267884-80 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:09:35 compute-0 rsyslogd[235877]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 19:09:35.238 185484 DEBUG nova.virt.libvirt.vif [None req-b9267884-80 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.645 185484 INFO nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Creating config drive at /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.config
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.659 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbmvsbm2m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.808 185484 DEBUG oslo_concurrency.processutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbmvsbm2m" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:35 compute-0 kernel: tapa9d60848-08: entered promiscuous mode
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.906 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:35 compute-0 NetworkManager[56191]: <info>  [1769540975.9078] manager: (tapa9d60848-08): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 27 19:09:35 compute-0 ovn_controller[97647]: 2026-01-27T19:09:35Z|00040|binding|INFO|Claiming lport a9d60848-08b8-4d9d-9aad-b656c10474d8 for this chassis.
Jan 27 19:09:35 compute-0 ovn_controller[97647]: 2026-01-27T19:09:35Z|00041|binding|INFO|a9d60848-08b8-4d9d-9aad-b656c10474d8: Claiming fa:16:3e:42:40:8d 192.168.0.63
Jan 27 19:09:35 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:35.929 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:40:8d 192.168.0.63'], port_security=['fa:16:3e:42:40:8d 192.168.0.63'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-z6txynvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-port-olblivelu66d', 'neutron:cidrs': '192.168.0.63/24', 'neutron:device_id': '04920b61-96ec-47fc-9d6d-dfdb491e0e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-z6txynvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-port-olblivelu66d', 'neutron:project_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a99badb-bb64-4e2f-95a8-78f317eb6676', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33ebbec8-56f4-45ac-84a6-f80dd4a7c167, chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=a9d60848-08b8-4d9d-9aad-b656c10474d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:09:35 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:35.930 106898 INFO neutron.agent.ovn.metadata.agent [-] Port a9d60848-08b8-4d9d-9aad-b656c10474d8 in datapath 4f32262d-dee8-406b-8a5a-09e95f48c8d5 bound to our chassis
Jan 27 19:09:35 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:35.931 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f32262d-dee8-406b-8a5a-09e95f48c8d5
Jan 27 19:09:35 compute-0 ovn_controller[97647]: 2026-01-27T19:09:35Z|00042|binding|INFO|Setting lport a9d60848-08b8-4d9d-9aad-b656c10474d8 ovn-installed in OVS
Jan 27 19:09:35 compute-0 ovn_controller[97647]: 2026-01-27T19:09:35Z|00043|binding|INFO|Setting lport a9d60848-08b8-4d9d-9aad-b656c10474d8 up in Southbound
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.936 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:35 compute-0 nova_compute[185480]: 2026-01-27 19:09:35.937 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:35 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:35.951 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[f37a6cb0-5d50-4964-b3eb-93b89c7f48c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:09:35 compute-0 systemd-machined[156762]: New machine qemu-3-instance-00000003.
Jan 27 19:09:35 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Jan 27 19:09:35 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:35.994 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2dea02-3a6f-4b43-819a-e24f23fb0e20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:09:35 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:35.997 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[1349d678-1b20-433d-83a6-d24493e74637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:09:36 compute-0 systemd-udevd[240528]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:09:36 compute-0 NetworkManager[56191]: <info>  [1769540976.0169] device (tapa9d60848-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 19:09:36 compute-0 NetworkManager[56191]: <info>  [1769540976.0190] device (tapa9d60848-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 19:09:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:36.035 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[3994252b-c04e-416b-93eb-69b376e0c952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:09:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:36.054 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[279aa063-c1a5-472f-97e7-5f036a415ca4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f32262d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:bd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383551, 'reachable_time': 43795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240536, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:09:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:36.068 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[64a69978-959b-4b8b-8575-f710eb0b953e]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap4f32262d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383562, 'tstamp': 383562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240539, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4f32262d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383566, 'tstamp': 383566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240539, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:09:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:36.069 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f32262d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.071 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.074 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:36.077 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f32262d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:09:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:36.077 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:09:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:36.077 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f32262d-d0, col_values=(('external_ids', {'iface-id': '5950ebf0-6d13-4405-b07d-fec152665bda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:09:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:09:36.078 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.166 185484 DEBUG nova.compute.manager [req-55093574-339f-4d5d-ad20-d170d962ce21 req-2e30b30a-f24a-4687-a0b2-5c0cbb0fa6df bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Received event network-vif-plugged-a9d60848-08b8-4d9d-9aad-b656c10474d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.166 185484 DEBUG oslo_concurrency.lockutils [req-55093574-339f-4d5d-ad20-d170d962ce21 req-2e30b30a-f24a-4687-a0b2-5c0cbb0fa6df bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.166 185484 DEBUG oslo_concurrency.lockutils [req-55093574-339f-4d5d-ad20-d170d962ce21 req-2e30b30a-f24a-4687-a0b2-5c0cbb0fa6df bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.167 185484 DEBUG oslo_concurrency.lockutils [req-55093574-339f-4d5d-ad20-d170d962ce21 req-2e30b30a-f24a-4687-a0b2-5c0cbb0fa6df bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.167 185484 DEBUG nova.compute.manager [req-55093574-339f-4d5d-ad20-d170d962ce21 req-2e30b30a-f24a-4687-a0b2-5c0cbb0fa6df bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Processing event network-vif-plugged-a9d60848-08b8-4d9d-9aad-b656c10474d8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.267 185484 DEBUG nova.network.neutron [req-d56640b7-2e15-419c-9a1d-a5e2692cd52e req-f382947c-2636-4bc9-9a06-50df0e27136a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Updated VIF entry in instance network info cache for port a9d60848-08b8-4d9d-9aad-b656c10474d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.268 185484 DEBUG nova.network.neutron [req-d56640b7-2e15-419c-9a1d-a5e2692cd52e req-f382947c-2636-4bc9-9a06-50df0e27136a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Updating instance_info_cache with network_info: [{"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.283 185484 DEBUG oslo_concurrency.lockutils [req-d56640b7-2e15-419c-9a1d-a5e2692cd52e req-f382947c-2636-4bc9-9a06-50df0e27136a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.432 185484 DEBUG nova.compute.manager [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.439 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769540976.438632, 04920b61-96ec-47fc-9d6d-dfdb491e0e77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.439 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] VM Started (Lifecycle Event)
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.446 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.453 185484 INFO nova.virt.libvirt.driver [-] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Instance spawned successfully.
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.454 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.457 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.462 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.482 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.483 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769540976.4387794, 04920b61-96ec-47fc-9d6d-dfdb491e0e77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.483 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] VM Paused (Lifecycle Event)
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.489 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.490 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.491 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.491 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.492 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.493 185484 DEBUG nova.virt.libvirt.driver [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.507 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.513 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769540976.4453936, 04920b61-96ec-47fc-9d6d-dfdb491e0e77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.514 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] VM Resumed (Lifecycle Event)
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.543 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.548 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.556 185484 INFO nova.compute.manager [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Took 3.28 seconds to spawn the instance on the hypervisor.
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.556 185484 DEBUG nova.compute.manager [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.566 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.618 185484 INFO nova.compute.manager [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Took 3.79 seconds to build instance.
Jan 27 19:09:36 compute-0 nova_compute[185480]: 2026-01-27 19:09:36.634 185484 DEBUG oslo_concurrency.lockutils [None req-b9267884-80b9-4f56-b5a6-d5e65e0c0fb6 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 3.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:36 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 19:09:36 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 19:09:36 compute-0 podman[240548]: 2026-01-27 19:09:36.851440655 +0000 UTC m=+0.099456396 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:09:36 compute-0 podman[240549]: 2026-01-27 19:09:36.874231812 +0000 UTC m=+0.118138343 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 19:09:37 compute-0 nova_compute[185480]: 2026-01-27 19:09:37.574 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:38 compute-0 nova_compute[185480]: 2026-01-27 19:09:38.288 185484 DEBUG nova.compute.manager [req-47b0ad99-c993-4531-aa5e-9e99da02e66e req-b260d3cc-4c96-4eab-a8d4-6422211d9854 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Received event network-vif-plugged-a9d60848-08b8-4d9d-9aad-b656c10474d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:09:38 compute-0 nova_compute[185480]: 2026-01-27 19:09:38.288 185484 DEBUG oslo_concurrency.lockutils [req-47b0ad99-c993-4531-aa5e-9e99da02e66e req-b260d3cc-4c96-4eab-a8d4-6422211d9854 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:38 compute-0 nova_compute[185480]: 2026-01-27 19:09:38.289 185484 DEBUG oslo_concurrency.lockutils [req-47b0ad99-c993-4531-aa5e-9e99da02e66e req-b260d3cc-4c96-4eab-a8d4-6422211d9854 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:38 compute-0 nova_compute[185480]: 2026-01-27 19:09:38.289 185484 DEBUG oslo_concurrency.lockutils [req-47b0ad99-c993-4531-aa5e-9e99da02e66e req-b260d3cc-4c96-4eab-a8d4-6422211d9854 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:38 compute-0 nova_compute[185480]: 2026-01-27 19:09:38.289 185484 DEBUG nova.compute.manager [req-47b0ad99-c993-4531-aa5e-9e99da02e66e req-b260d3cc-4c96-4eab-a8d4-6422211d9854 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] No waiting events found dispatching network-vif-plugged-a9d60848-08b8-4d9d-9aad-b656c10474d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:09:38 compute-0 nova_compute[185480]: 2026-01-27 19:09:38.289 185484 WARNING nova.compute.manager [req-47b0ad99-c993-4531-aa5e-9e99da02e66e req-b260d3cc-4c96-4eab-a8d4-6422211d9854 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Received unexpected event network-vif-plugged-a9d60848-08b8-4d9d-9aad-b656c10474d8 for instance with vm_state active and task_state None.
Jan 27 19:09:39 compute-0 podman[240611]: 2026-01-27 19:09:39.308169507 +0000 UTC m=+0.078469535 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-type=git, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.expose-services=, name=ubi9, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 19:09:39 compute-0 nova_compute[185480]: 2026-01-27 19:09:39.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:09:39 compute-0 nova_compute[185480]: 2026-01-27 19:09:39.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:09:40 compute-0 nova_compute[185480]: 2026-01-27 19:09:40.159 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:09:40 compute-0 nova_compute[185480]: 2026-01-27 19:09:40.160 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:09:40 compute-0 nova_compute[185480]: 2026-01-27 19:09:40.160 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:09:40 compute-0 nova_compute[185480]: 2026-01-27 19:09:40.250 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:41 compute-0 nova_compute[185480]: 2026-01-27 19:09:41.963 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updating instance_info_cache with network_info: [{"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:09:41 compute-0 nova_compute[185480]: 2026-01-27 19:09:41.982 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:09:41 compute-0 nova_compute[185480]: 2026-01-27 19:09:41.982 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:09:41 compute-0 nova_compute[185480]: 2026-01-27 19:09:41.982 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:09:41 compute-0 nova_compute[185480]: 2026-01-27 19:09:41.982 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.000 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.001 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.001 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.001 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.083 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.175 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.176 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.238 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.239 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.308 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.309 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.376 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.391 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.450 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.451 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.517 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.519 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.577 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.595 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.596 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.656 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.668 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.759 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.761 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.827 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.830 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.896 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.897 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:09:42 compute-0 nova_compute[185480]: 2026-01-27 19:09:42.958 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.360 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.362 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4943MB free_disk=72.39818572998047GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.363 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.364 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.473 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.475 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 41f46cfd-06bf-4ef6-85a3-cc6e8629637e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.475 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 04920b61-96ec-47fc-9d6d-dfdb491e0e77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.475 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.476 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.573 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.590 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.613 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:09:43 compute-0 nova_compute[185480]: 2026-01-27 19:09:43.614 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:09:44 compute-0 nova_compute[185480]: 2026-01-27 19:09:44.147 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:09:44 compute-0 nova_compute[185480]: 2026-01-27 19:09:44.147 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:09:44 compute-0 nova_compute[185480]: 2026-01-27 19:09:44.148 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:09:44 compute-0 nova_compute[185480]: 2026-01-27 19:09:44.148 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:09:44 compute-0 podman[240670]: 2026-01-27 19:09:44.317160156 +0000 UTC m=+0.087772654 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Jan 27 19:09:45 compute-0 nova_compute[185480]: 2026-01-27 19:09:45.253 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:45 compute-0 nova_compute[185480]: 2026-01-27 19:09:45.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:09:47 compute-0 nova_compute[185480]: 2026-01-27 19:09:47.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:09:47 compute-0 nova_compute[185480]: 2026-01-27 19:09:47.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:09:47 compute-0 nova_compute[185480]: 2026-01-27 19:09:47.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:09:47 compute-0 nova_compute[185480]: 2026-01-27 19:09:47.578 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:50 compute-0 nova_compute[185480]: 2026-01-27 19:09:50.257 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:50 compute-0 podman[240691]: 2026-01-27 19:09:50.304014799 +0000 UTC m=+0.068723369 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126)
Jan 27 19:09:52 compute-0 sshd-session[240711]: Invalid user sol from 45.148.10.240 port 48388
Jan 27 19:09:52 compute-0 sshd-session[240711]: Connection closed by invalid user sol 45.148.10.240 port 48388 [preauth]
Jan 27 19:09:52 compute-0 nova_compute[185480]: 2026-01-27 19:09:52.582 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:53 compute-0 podman[240713]: 2026-01-27 19:09:53.305186105 +0000 UTC m=+0.078630081 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 19:09:53 compute-0 podman[240715]: 2026-01-27 19:09:53.34590219 +0000 UTC m=+0.108553023 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 19:09:53 compute-0 podman[240714]: 2026-01-27 19:09:53.364628807 +0000 UTC m=+0.134192789 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 19:09:55 compute-0 nova_compute[185480]: 2026-01-27 19:09:55.259 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:57 compute-0 nova_compute[185480]: 2026-01-27 19:09:57.583 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:09:59 compute-0 podman[201378]: time="2026-01-27T19:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:09:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:09:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4377 "" "Go-http-client/1.1"
Jan 27 19:10:00 compute-0 nova_compute[185480]: 2026-01-27 19:10:00.262 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:01 compute-0 openstack_network_exporter[204477]: ERROR   19:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:10:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:10:01 compute-0 openstack_network_exporter[204477]: ERROR   19:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:10:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:10:02 compute-0 nova_compute[185480]: 2026-01-27 19:10:02.586 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:05 compute-0 nova_compute[185480]: 2026-01-27 19:10:05.265 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:05 compute-0 ovn_controller[97647]: 2026-01-27T19:10:05Z|00044|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Jan 27 19:10:07 compute-0 podman[240777]: 2026-01-27 19:10:07.317496595 +0000 UTC m=+0.100078826 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 19:10:07 compute-0 podman[240778]: 2026-01-27 19:10:07.326224088 +0000 UTC m=+0.099881711 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 19:10:07 compute-0 nova_compute[185480]: 2026-01-27 19:10:07.588 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:10 compute-0 nova_compute[185480]: 2026-01-27 19:10:10.268 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:10 compute-0 podman[240832]: 2026-01-27 19:10:10.305063578 +0000 UTC m=+0.075497825 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_id=kepler, release-0.7.12=, container_name=kepler, release=1214.1726694543, distribution-scope=public, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, name=ubi9, io.openshift.tags=base rhel9, io.openshift.expose-services=)
Jan 27 19:10:10 compute-0 ovn_controller[97647]: 2026-01-27T19:10:10Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:40:8d 192.168.0.63
Jan 27 19:10:10 compute-0 ovn_controller[97647]: 2026-01-27T19:10:10Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:40:8d 192.168.0.63
Jan 27 19:10:12 compute-0 nova_compute[185480]: 2026-01-27 19:10:12.590 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:14 compute-0 podman[240852]: 2026-01-27 19:10:14.791283736 +0000 UTC m=+0.106322977 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, distribution-scope=public, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 19:10:15 compute-0 nova_compute[185480]: 2026-01-27 19:10:15.270 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:17 compute-0 nova_compute[185480]: 2026-01-27 19:10:17.597 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:20 compute-0 nova_compute[185480]: 2026-01-27 19:10:20.273 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:10:20.516 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:10:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:10:20.517 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:10:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:10:20.518 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:10:21 compute-0 podman[240874]: 2026-01-27 19:10:21.28417915 +0000 UTC m=+0.066349462 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260126, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 19:10:22 compute-0 nova_compute[185480]: 2026-01-27 19:10:22.599 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:24 compute-0 podman[240894]: 2026-01-27 19:10:24.310839548 +0000 UTC m=+0.076648462 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:10:24 compute-0 podman[240896]: 2026-01-27 19:10:24.327341781 +0000 UTC m=+0.079287787 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 19:10:24 compute-0 podman[240895]: 2026-01-27 19:10:24.395965337 +0000 UTC m=+0.155010346 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 19:10:25 compute-0 nova_compute[185480]: 2026-01-27 19:10:25.276 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:27 compute-0 nova_compute[185480]: 2026-01-27 19:10:27.602 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:29 compute-0 podman[201378]: time="2026-01-27T19:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:10:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:10:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4367 "" "Go-http-client/1.1"
Jan 27 19:10:30 compute-0 nova_compute[185480]: 2026-01-27 19:10:30.278 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:31 compute-0 openstack_network_exporter[204477]: ERROR   19:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:10:31 compute-0 openstack_network_exporter[204477]: ERROR   19:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.095 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.096 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.104 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b6b280bb-d859-43f3-836a-f93d00510948', 'name': 'test_0', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.107 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41f46cfd-06bf-4ef6-85a3-cc6e8629637e', 'name': 'vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.111 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 04920b61-96ec-47fc-9d6d-dfdb491e0e77 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.112 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/04920b61-96ec-47fc-9d6d-dfdb491e0e77 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.504 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Tue, 27 Jan 2026 19:10:32 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-b84a90a6-2464-4e4a-90ab-a86660f0b95e x-openstack-request-id: req-b84a90a6-2464-4e4a-90ab-a86660f0b95e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.504 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "04920b61-96ec-47fc-9d6d-dfdb491e0e77", "name": "vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6", "status": "ACTIVE", "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "user_id": "6d30d46dc88a4403b3a241949384d8f7", "metadata": {"metering.server_group": "bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871"}, "hostId": "d5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0", "image": {"id": "525193b7-cb5a-4d63-9747-3b917622bbe3", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/525193b7-cb5a-4d63-9747-3b917622bbe3"}]}, "flavor": {"id": "bc7c8c58-0a2b-4396-9f89-7ff8e35afa36", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bc7c8c58-0a2b-4396-9f89-7ff8e35afa36"}]}, "created": "2026-01-27T19:09:31Z", "updated": "2026-01-27T19:09:36Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.63", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:42:40:8d"}, {"version": 4, "addr": "192.168.122.186", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:42:40:8d"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/04920b61-96ec-47fc-9d6d-dfdb491e0e77"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/04920b61-96ec-47fc-9d6d-dfdb491e0e77"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T19:09:36.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.504 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/04920b61-96ec-47fc-9d6d-dfdb491e0e77 used request id req-b84a90a6-2464-4e4a-90ab-a86660f0b95e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.506 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '04920b61-96ec-47fc-9d6d-dfdb491e0e77', 'name': 'vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.506 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.507 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T19:10:32.507103) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.536 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 21766144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.536 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.536 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.564 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.565 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.565 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.592 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.592 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.593 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.593 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.594 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T19:10:32.594228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 nova_compute[185480]: 2026-01-27 19:10:32.604 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.623 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.645 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/memory.usage volume: 49.0078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.671 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/memory.usage volume: 49.57421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.672 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.672 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.673 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.673 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.673 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.673 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.674 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.674 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.675 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.675 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T19:10:32.673106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T19:10:32.675285) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.679 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.683 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.686 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 04920b61-96ec-47fc-9d6d-dfdb491e0e77 / tapa9d60848-08 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.687 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.687 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.687 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.687 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.687 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.688 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T19:10:32.688065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.758 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.759 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.759 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.834 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.834 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.834 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.896 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.897 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.897 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.897 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.897 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.898 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.898 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.898 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.898 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.899 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.899 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.899 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.899 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.drop volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.899 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T19:10:32.898113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.899 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.900 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.900 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.900 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T19:10:32.899517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.900 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.900 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.901 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.901 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.901 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.901 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.901 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.902 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.902 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.902 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T19:10:32.900858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.903 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.903 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.903 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.903 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/cpu volume: 36000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T19:10:32.903750) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.904 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/cpu volume: 195560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.904 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/cpu volume: 33770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.904 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.904 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.904 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.905 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.905 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.905 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T19:10:32.905144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.905 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.905 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.906 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.906 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.906 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.906 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.906 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.907 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.907 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.907 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.908 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.908 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.908 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T19:10:32.907892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.908 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.908 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.908 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.909 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.909 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.909 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 713872381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.909 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 102610265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.909 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 79720785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.910 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 746276888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.910 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 98242096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T19:10:32.909351) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.910 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 91644949 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.910 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.latency volume: 623753590 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.911 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.latency volume: 105958430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.911 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.latency volume: 83308410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.912 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.912 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.912 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.912 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.912 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.913 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.913 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T19:10:32.912202) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.913 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.913 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T19:10:32.913637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.913 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.914 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets volume: 39 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.914 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.914 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.914 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.915 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.915 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.915 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.915 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.915 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.915 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.916 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.916 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.916 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.916 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.917 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T19:10:32.915101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.917 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.917 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.917 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.917 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.918 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.918 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T19:10:32.918051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.918 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.bytes volume: 4628 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.918 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.bytes volume: 1991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.919 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.919 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.919 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.919 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 2565149834 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T19:10:32.919655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.920 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 13830550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.920 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.920 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 1314777326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.920 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 12613240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.921 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.921 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.latency volume: 1269545911 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.921 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.latency volume: 10814945 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.922 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.922 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.922 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.922 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.922 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.923 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.923 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.923 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.923 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.923 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.924 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.924 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.924 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.924 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.924 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T19:10:32.923071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.924 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.924 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.924 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.924 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T19:10:32.924627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.925 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.925 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.925 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.925 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.926 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.926 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.requests volume: 222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.926 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.926 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.927 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.927 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.927 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.927 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.927 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.927 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.928 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.928 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.928 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.928 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.928 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T19:10:32.927565) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.928 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.928 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.928 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.928 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6>]
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.929 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.929 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T19:10:32.928756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.929 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T19:10:32.929650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.930 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.930 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.930 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.931 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.931 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.931 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.931 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.931 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.931 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T19:10:32.930963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.931 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.932 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T19:10:32.931957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.932 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.932 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.932 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.932 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.933 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.933 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.933 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.933 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.934 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.934 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.934 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.934 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.934 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.934 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.934 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes volume: 2004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.934 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.935 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T19:10:32.934520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.935 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.935 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.935 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.935 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.936 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.936 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T19:10:32.936024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.936 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6>]
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.936 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.937 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.938 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.938 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.938 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.938 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.938 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.938 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.938 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.938 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.938 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:10:32.938 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:10:35 compute-0 nova_compute[185480]: 2026-01-27 19:10:35.282 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:37 compute-0 nova_compute[185480]: 2026-01-27 19:10:37.607 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:38 compute-0 podman[240959]: 2026-01-27 19:10:38.310886038 +0000 UTC m=+0.081505371 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:10:38 compute-0 podman[240960]: 2026-01-27 19:10:38.340352409 +0000 UTC m=+0.098230931 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:10:39 compute-0 nova_compute[185480]: 2026-01-27 19:10:39.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:10:39 compute-0 nova_compute[185480]: 2026-01-27 19:10:39.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:10:39 compute-0 nova_compute[185480]: 2026-01-27 19:10:39.517 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:10:39 compute-0 nova_compute[185480]: 2026-01-27 19:10:39.728 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:10:39 compute-0 nova_compute[185480]: 2026-01-27 19:10:39.729 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:10:39 compute-0 nova_compute[185480]: 2026-01-27 19:10:39.730 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:10:39 compute-0 nova_compute[185480]: 2026-01-27 19:10:39.730 185484 DEBUG nova.objects.instance [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6b280bb-d859-43f3-836a-f93d00510948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:10:40 compute-0 nova_compute[185480]: 2026-01-27 19:10:40.286 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:40 compute-0 nova_compute[185480]: 2026-01-27 19:10:40.983 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updating instance_info_cache with network_info: [{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:10:41 compute-0 podman[241002]: 2026-01-27 19:10:41.296922894 +0000 UTC m=+0.077020722 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, io.buildah.version=1.29.0, version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 19:10:41 compute-0 nova_compute[185480]: 2026-01-27 19:10:41.370 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:10:41 compute-0 nova_compute[185480]: 2026-01-27 19:10:41.371 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:10:41 compute-0 nova_compute[185480]: 2026-01-27 19:10:41.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:10:41 compute-0 nova_compute[185480]: 2026-01-27 19:10:41.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 19:10:41 compute-0 nova_compute[185480]: 2026-01-27 19:10:41.571 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 19:10:42 compute-0 nova_compute[185480]: 2026-01-27 19:10:42.572 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:10:42 compute-0 nova_compute[185480]: 2026-01-27 19:10:42.611 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:42 compute-0 nova_compute[185480]: 2026-01-27 19:10:42.817 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:10:42 compute-0 nova_compute[185480]: 2026-01-27 19:10:42.818 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:10:42 compute-0 nova_compute[185480]: 2026-01-27 19:10:42.819 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:10:42 compute-0 nova_compute[185480]: 2026-01-27 19:10:42.819 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.246 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.313 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.314 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.374 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.376 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.436 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.437 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.490 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.497 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.569 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.571 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.632 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.635 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.694 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.696 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.753 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.761 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.823 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.824 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.887 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.890 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.960 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:10:43 compute-0 nova_compute[185480]: 2026-01-27 19:10:43.961 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.018 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.384 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.386 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4874MB free_disk=72.3766860961914GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.387 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.388 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.809 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.811 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 41f46cfd-06bf-4ef6-85a3-cc6e8629637e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.812 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 04920b61-96ec-47fc-9d6d-dfdb491e0e77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.813 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.813 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.876 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing inventories for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.934 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating ProviderTree inventory for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.935 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating inventory in ProviderTree for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.954 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing aggregate associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 19:10:44 compute-0 nova_compute[185480]: 2026-01-27 19:10:44.977 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing trait associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, traits: HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 19:10:45 compute-0 nova_compute[185480]: 2026-01-27 19:10:45.055 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:10:45 compute-0 nova_compute[185480]: 2026-01-27 19:10:45.127 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:10:45 compute-0 nova_compute[185480]: 2026-01-27 19:10:45.131 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:10:45 compute-0 nova_compute[185480]: 2026-01-27 19:10:45.132 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:10:45 compute-0 nova_compute[185480]: 2026-01-27 19:10:45.134 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:10:45 compute-0 nova_compute[185480]: 2026-01-27 19:10:45.289 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:45 compute-0 podman[241059]: 2026-01-27 19:10:45.314207279 +0000 UTC m=+0.080837745 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_id=openstack_network_exporter, name=ubi9-minimal, version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 19:10:46 compute-0 nova_compute[185480]: 2026-01-27 19:10:46.184 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:10:46 compute-0 nova_compute[185480]: 2026-01-27 19:10:46.185 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:10:46 compute-0 nova_compute[185480]: 2026-01-27 19:10:46.186 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:10:46 compute-0 nova_compute[185480]: 2026-01-27 19:10:46.186 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:10:46 compute-0 nova_compute[185480]: 2026-01-27 19:10:46.187 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:10:47 compute-0 nova_compute[185480]: 2026-01-27 19:10:47.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:10:47 compute-0 nova_compute[185480]: 2026-01-27 19:10:47.517 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:10:47 compute-0 nova_compute[185480]: 2026-01-27 19:10:47.518 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 19:10:47 compute-0 nova_compute[185480]: 2026-01-27 19:10:47.614 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:49 compute-0 nova_compute[185480]: 2026-01-27 19:10:49.703 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:10:49 compute-0 nova_compute[185480]: 2026-01-27 19:10:49.705 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:10:50 compute-0 nova_compute[185480]: 2026-01-27 19:10:50.292 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:52 compute-0 podman[241081]: 2026-01-27 19:10:52.332096686 +0000 UTC m=+0.103347355 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:10:52 compute-0 nova_compute[185480]: 2026-01-27 19:10:52.616 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:55 compute-0 nova_compute[185480]: 2026-01-27 19:10:55.296 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:55 compute-0 podman[241100]: 2026-01-27 19:10:55.299537317 +0000 UTC m=+0.073729242 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 19:10:55 compute-0 podman[241102]: 2026-01-27 19:10:55.315821736 +0000 UTC m=+0.091695002 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 19:10:55 compute-0 podman[241101]: 2026-01-27 19:10:55.337899564 +0000 UTC m=+0.115655065 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
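The podman health_status entries above are periodic container health-check events; each container's config_data carries a 'healthcheck' block whose test script is bind-mounted at /openstack inside the container (for example '/openstack/healthcheck compute' for ceilometer_agent_compute). A minimal sketch of invoking one of these checks by hand with the podman CLI, assuming root access on compute-0 and using the container name taken from the log:

    import subprocess

    # Ask podman to run the container's configured health-check test
    # ("/openstack/healthcheck compute" for this container); exit code 0
    # corresponds to the health_status=healthy events in the journal.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True)
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout}{result.stderr}".strip())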
Jan 27 19:10:57 compute-0 nova_compute[185480]: 2026-01-27 19:10:57.622 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:10:59 compute-0 podman[201378]: time="2026-01-27T19:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:10:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:10:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4369 "" "Go-http-client/1.1"
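The two podman[201378] access-log lines show the libpod REST API answering a Go HTTP client; per the podman_exporter config earlier in the log (CONTAINER_HOST=unix:///run/podman/podman.sock), that client is the prometheus-podman-exporter polling container state. A minimal sketch of issuing the same containers/json query from Python over the unix socket, assuming local root access; the socket path and the /v4.9.3 API prefix are taken from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that connects to a local unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    # Same endpoint the exporter polls, per the access-log line above.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")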
Jan 27 19:11:00 compute-0 nova_compute[185480]: 2026-01-27 19:11:00.298 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:01 compute-0 openstack_network_exporter[204477]: ERROR   19:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:11:01 compute-0 openstack_network_exporter[204477]: ERROR   19:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
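These recurring exporter errors come from ovs-appctl-style calls (appctl.go) to dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show; those commands apply only to a userspace (netdev/DPDK) datapath, so on a host where OVS likely runs just the kernel datapath ovs-vswitchd answers "please specify an existing datapath". A minimal sketch of reproducing the probe by hand, assuming ovs-appctl is available and the vswitchd control socket is reachable:

    import subprocess

    # Ask ovs-vswitchd for per-PMD rxq assignments; with no netdev datapath
    # configured this fails the same way the exporter logs above.
    proc = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
        capture_output=True, text=True)
    if proc.returncode != 0:
        print("no userspace datapath:", (proc.stdout + proc.stderr).strip())
    else:
        print(proc.stdout)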
Jan 27 19:11:02 compute-0 nova_compute[185480]: 2026-01-27 19:11:02.624 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:05 compute-0 nova_compute[185480]: 2026-01-27 19:11:05.299 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:07 compute-0 nova_compute[185480]: 2026-01-27 19:11:07.626 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:09 compute-0 podman[241163]: 2026-01-27 19:11:09.351624429 +0000 UTC m=+0.120468404 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 19:11:09 compute-0 podman[241162]: 2026-01-27 19:11:09.369179338 +0000 UTC m=+0.137109850 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:11:10 compute-0 nova_compute[185480]: 2026-01-27 19:11:10.303 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:12 compute-0 podman[241205]: 2026-01-27 19:11:12.351218216 +0000 UTC m=+0.103237423 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., version=9.4, config_id=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.openshift.expose-services=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 19:11:12 compute-0 nova_compute[185480]: 2026-01-27 19:11:12.630 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:15 compute-0 nova_compute[185480]: 2026-01-27 19:11:15.307 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:16 compute-0 podman[241225]: 2026-01-27 19:11:16.327733125 +0000 UTC m=+0.096746744 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=openstack_network_exporter, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 19:11:17 compute-0 nova_compute[185480]: 2026-01-27 19:11:17.632 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:20 compute-0 nova_compute[185480]: 2026-01-27 19:11:20.310 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:20.517 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:20.517 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:20.518 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
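The three oslo_concurrency.lockutils DEBUG lines above (Acquiring / acquired / "released") are the standard trace emitted around a named in-process lock; here neutron's ProcessMonitor wraps _check_child_processes with it. A minimal sketch that produces the same pattern with oslo.concurrency, assuming DEBUG logging is enabled; the function body is a placeholder:

    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs while holding the named lock; lockutils logs the
        # "Acquiring" / "acquired" / "released" lines around this call.
        pass

    check_child_processes()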
Jan 27 19:11:22 compute-0 nova_compute[185480]: 2026-01-27 19:11:22.634 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:23 compute-0 podman[241247]: 2026-01-27 19:11:23.300593422 +0000 UTC m=+0.074463880 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Jan 27 19:11:25 compute-0 nova_compute[185480]: 2026-01-27 19:11:25.314 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:26 compute-0 podman[241269]: 2026-01-27 19:11:26.307939388 +0000 UTC m=+0.075340731 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:11:26 compute-0 podman[241267]: 2026-01-27 19:11:26.31948862 +0000 UTC m=+0.084197217 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:11:26 compute-0 podman[241268]: 2026-01-27 19:11:26.371199313 +0000 UTC m=+0.128212863 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:11:27 compute-0 nova_compute[185480]: 2026-01-27 19:11:27.636 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:28.150 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:11:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:28.152 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:11:28 compute-0 nova_compute[185480]: 2026-01-27 19:11:28.156 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:28 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:28.158 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:11:29 compute-0 podman[201378]: time="2026-01-27T19:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:11:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:11:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4370 "" "Go-http-client/1.1"
Jan 27 19:11:30 compute-0 nova_compute[185480]: 2026-01-27 19:11:30.317 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:31 compute-0 openstack_network_exporter[204477]: ERROR   19:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:11:31 compute-0 openstack_network_exporter[204477]: ERROR   19:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:11:32 compute-0 nova_compute[185480]: 2026-01-27 19:11:32.638 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.541 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "92ff85a4-5620-4dd0-8930-62b7f561edf6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.542 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.567 185484 DEBUG nova.compute.manager [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.665 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.666 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.677 185484 DEBUG nova.virt.hardware [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.677 185484 INFO nova.compute.claims [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Claim successful on node compute-0.ctlplane.example.com
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.835 185484 DEBUG nova.compute.provider_tree [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.851 185484 DEBUG nova.scheduler.client.report [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
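The inventory dict in the report-client line above determines the schedulable capacity of this node: placement treats (total - reserved) * allocation_ratio as the capacity for each resource class, so the logged figures work out to 32 schedulable VCPUs, 7167 MB of RAM and roughly 70 GB of disk. A short worked computation over the values taken from the log:

    # Inventory as logged for provider 8877e97b-aaf6-4210-a385-0f49c1a02906.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }

    # Capacity exposed to allocations: (total - reserved) * allocation_ratio.
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2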
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.888 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.889 185484 DEBUG nova.compute.manager [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.936 185484 DEBUG nova.compute.manager [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.937 185484 DEBUG nova.network.neutron [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.958 185484 INFO nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 19:11:34 compute-0 nova_compute[185480]: 2026-01-27 19:11:34.993 185484 DEBUG nova.compute.manager [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.085 185484 DEBUG nova.compute.manager [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.087 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.088 185484 INFO nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Creating image(s)
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.089 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "/var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.090 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.091 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.116 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.176 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.177 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.178 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.194 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.250 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.251 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97,backing_fmt=raw /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.290 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97,backing_fmt=raw /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.291 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "22a6540665f986d7c7b2a582b40f6fa8e7910e97" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.291 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.321 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.362 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/22a6540665f986d7c7b2a582b40f6fa8e7910e97 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.364 185484 DEBUG nova.virt.disk.api [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Checking if we can resize image /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.364 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.439 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.440 185484 DEBUG nova.virt.disk.api [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Cannot resize image /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.440 185484 DEBUG nova.objects.instance [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 92ff85a4-5620-4dd0-8930-62b7f561edf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.455 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "/var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.456 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.457 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.470 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.531 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.532 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.533 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.546 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.611 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.612 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.658 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.660 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.661 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.725 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.726 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.727 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Ensure instance console log exists: /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.728 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.728 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:35 compute-0 nova_compute[185480]: 2026-01-27 19:11:35.729 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:11:36 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 19:11:37 compute-0 nova_compute[185480]: 2026-01-27 19:11:37.641 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.517 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.518 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.690 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.690 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.691 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.701 185484 DEBUG nova.network.neutron [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Successfully updated port: 31c187f6-645a-4415-a7a2-7c358adeb7c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.723 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.724 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquired lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.724 185484 DEBUG nova.network.neutron [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.825 185484 DEBUG nova.compute.manager [req-544d4fa1-e3bb-4a7f-92c6-37e6f2b66103 req-82896e17-618e-4f73-b7c4-e329f33e45e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Received event network-changed-31c187f6-645a-4415-a7a2-7c358adeb7c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.826 185484 DEBUG nova.compute.manager [req-544d4fa1-e3bb-4a7f-92c6-37e6f2b66103 req-82896e17-618e-4f73-b7c4-e329f33e45e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Refreshing instance network info cache due to event network-changed-31c187f6-645a-4415-a7a2-7c358adeb7c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.827 185484 DEBUG oslo_concurrency.lockutils [req-544d4fa1-e3bb-4a7f-92c6-37e6f2b66103 req-82896e17-618e-4f73-b7c4-e329f33e45e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:11:39 compute-0 nova_compute[185480]: 2026-01-27 19:11:39.946 185484 DEBUG nova.network.neutron [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:11:40 compute-0 podman[241360]: 2026-01-27 19:11:40.319621942 +0000 UTC m=+0.090629965 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 19:11:40 compute-0 nova_compute[185480]: 2026-01-27 19:11:40.324 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:40 compute-0 podman[241359]: 2026-01-27 19:11:40.338594736 +0000 UTC m=+0.102819863 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.398 185484 DEBUG nova.network.neutron [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Updating instance_info_cache with network_info: [{"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.423 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Releasing lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.424 185484 DEBUG nova.compute.manager [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Instance network_info: |[{"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.424 185484 DEBUG oslo_concurrency.lockutils [req-544d4fa1-e3bb-4a7f-92c6-37e6f2b66103 req-82896e17-618e-4f73-b7c4-e329f33e45e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.425 185484 DEBUG nova.network.neutron [req-544d4fa1-e3bb-4a7f-92c6-37e6f2b66103 req-82896e17-618e-4f73-b7c4-e329f33e45e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Refreshing network info cache for port 31c187f6-645a-4415-a7a2-7c358adeb7c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.428 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Start _get_guest_xml network_info=[{"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T19:03:35Z,direct_url=<?>,disk_format='qcow2',id=525193b7-cb5a-4d63-9747-3b917622bbe3,min_disk=0,min_ram=0,name='cirros',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T19:03:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.435 185484 WARNING nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.445 185484 DEBUG nova.virt.libvirt.host [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.446 185484 DEBUG nova.virt.libvirt.host [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.458 185484 DEBUG nova.virt.libvirt.host [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.459 185484 DEBUG nova.virt.libvirt.host [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.459 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.460 185484 DEBUG nova.virt.hardware [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T19:03:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bc7c8c58-0a2b-4396-9f89-7ff8e35afa36',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T19:03:35Z,direct_url=<?>,disk_format='qcow2',id=525193b7-cb5a-4d63-9747-3b917622bbe3,min_disk=0,min_ram=0,name='cirros',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T19:03:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.460 185484 DEBUG nova.virt.hardware [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.461 185484 DEBUG nova.virt.hardware [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.461 185484 DEBUG nova.virt.hardware [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.462 185484 DEBUG nova.virt.hardware [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.462 185484 DEBUG nova.virt.hardware [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.462 185484 DEBUG nova.virt.hardware [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.463 185484 DEBUG nova.virt.hardware [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.463 185484 DEBUG nova.virt.hardware [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.463 185484 DEBUG nova.virt.hardware [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.464 185484 DEBUG nova.virt.hardware [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.467 185484 DEBUG nova.virt.libvirt.vif [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:11:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf',id=4,image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f04ec1493db14ca1adbb4b6abd1667b1',ramdisk_id='',reservation_id='r-jdtvuyp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:11:35Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yNzUzMjE5NTgyOTM2MzI4MjgzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI3NTMyMTk1ODI5MzYzMjgyODM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjc1MzIxOTU4MjkzNjMyODI4Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI3NTMyMTk1ODI5MzYzMjgyODM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yNzUzMjE5NTgyOTM2MzI4MjgzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yNzUzMjE5NTgyOTM2MzI4MjgzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 27 19:11:41 compute-0 nova_compute[185480]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjc1MzIxOTU4MjkzNjMyODI4Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI3NTMyMTk1ODI5MzYzMjgyODM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yNzUzMjE5NTgyOTM2MzI4MjgzPT0tLQo=',user_id='6d30d46dc88a4403b3a241949384d8f7',uuid=92ff85a4-5620-4dd0-8930-62b7f561edf6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.468 185484 DEBUG nova.network.os_vif_util [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converting VIF {"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.468 185484 DEBUG nova.network.os_vif_util [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:04:d7,bridge_name='br-int',has_traffic_filtering=True,id=31c187f6-645a-4415-a7a2-7c358adeb7c3,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31c187f6-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.469 185484 DEBUG nova.objects.instance [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92ff85a4-5620-4dd0-8930-62b7f561edf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.489 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] End _get_guest_xml xml=<domain type="kvm">
Jan 27 19:11:41 compute-0 nova_compute[185480]:   <uuid>92ff85a4-5620-4dd0-8930-62b7f561edf6</uuid>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   <name>instance-00000004</name>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   <memory>524288</memory>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   <vcpu>1</vcpu>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   <metadata>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <nova:name>vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf</nova:name>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <nova:creationTime>2026-01-27 19:11:41</nova:creationTime>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <nova:flavor name="m1.small">
Jan 27 19:11:41 compute-0 nova_compute[185480]:         <nova:memory>512</nova:memory>
Jan 27 19:11:41 compute-0 nova_compute[185480]:         <nova:disk>1</nova:disk>
Jan 27 19:11:41 compute-0 nova_compute[185480]:         <nova:swap>0</nova:swap>
Jan 27 19:11:41 compute-0 nova_compute[185480]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 19:11:41 compute-0 nova_compute[185480]:         <nova:vcpus>1</nova:vcpus>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       </nova:flavor>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <nova:owner>
Jan 27 19:11:41 compute-0 nova_compute[185480]:         <nova:user uuid="6d30d46dc88a4403b3a241949384d8f7">admin</nova:user>
Jan 27 19:11:41 compute-0 nova_compute[185480]:         <nova:project uuid="f04ec1493db14ca1adbb4b6abd1667b1">admin</nova:project>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       </nova:owner>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <nova:root type="image" uuid="525193b7-cb5a-4d63-9747-3b917622bbe3"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <nova:ports>
Jan 27 19:11:41 compute-0 nova_compute[185480]:         <nova:port uuid="31c187f6-645a-4415-a7a2-7c358adeb7c3">
Jan 27 19:11:41 compute-0 nova_compute[185480]:           <nova:ip type="fixed" address="192.168.0.191" ipVersion="4"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:         </nova:port>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       </nova:ports>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     </nova:instance>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   </metadata>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   <sysinfo type="smbios">
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <system>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <entry name="manufacturer">RDO</entry>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <entry name="product">OpenStack Compute</entry>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <entry name="serial">92ff85a4-5620-4dd0-8930-62b7f561edf6</entry>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <entry name="uuid">92ff85a4-5620-4dd0-8930-62b7f561edf6</entry>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <entry name="family">Virtual Machine</entry>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     </system>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   </sysinfo>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   <os>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <boot dev="hd"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <smbios mode="sysinfo"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   </os>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   <features>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <acpi/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <apic/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <vmcoreinfo/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   </features>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   <clock offset="utc">
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <timer name="hpet" present="no"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   </clock>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   <cpu mode="host-model" match="exact">
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   </cpu>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   <devices>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <target dev="vda" bus="virtio"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <target dev="vdb" bus="virtio"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <disk type="file" device="cdrom">
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.config"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <target dev="sda" bus="sata"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <interface type="ethernet">
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <mac address="fa:16:3e:dc:04:d7"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <mtu size="1442"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <target dev="tap31c187f6-64"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     </interface>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <serial type="pty">
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <log file="/var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/console.log" append="off"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     </serial>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <video>
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     </video>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <input type="tablet" bus="usb"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <rng model="virtio">
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <backend model="random">/dev/urandom</backend>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     </rng>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <controller type="usb" index="0"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     <memballoon model="virtio">
Jan 27 19:11:41 compute-0 nova_compute[185480]:       <stats period="10"/>
Jan 27 19:11:41 compute-0 nova_compute[185480]:     </memballoon>
Jan 27 19:11:41 compute-0 nova_compute[185480]:   </devices>
Jan 27 19:11:41 compute-0 nova_compute[185480]: </domain>
Jan 27 19:11:41 compute-0 nova_compute[185480]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.491 185484 DEBUG nova.compute.manager [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Preparing to wait for external event network-vif-plugged-31c187f6-645a-4415-a7a2-7c358adeb7c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.491 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.491 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.492 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.492 185484 DEBUG nova.virt.libvirt.vif [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:11:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf',id=4,image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f04ec1493db14ca1adbb4b6abd1667b1',ramdisk_id='',reservation_id='r-jdtvuyp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:11:35Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yNzUzMjE5NTgyOTM2MzI4MjgzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI3NTMyMTk1ODI5MzYzMjgyODM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjc1MzIxOTU4MjkzNjMyODI4Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI3NTMyMTk1ODI5MzYzMjgyODM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yNzUzMjE5NTgyOTM2MzI4MjgzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yNzUzMjE5NTgyOTM2MzI4MjgzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 27 19:11:41 compute-0 nova_compute[185480]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjc1MzIxOTU4MjkzNjMyODI4Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI3NTMyMTk1ODI5MzYzMjgyODM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yNzUzMjE5NTgyOTM2MzI4MjgzPT0tLQo=',user_id='6d30d46dc88a4403b3a241949384d8f7',uuid=92ff85a4-5620-4dd0-8930-62b7f561edf6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.493 185484 DEBUG nova.network.os_vif_util [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converting VIF {"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.493 185484 DEBUG nova.network.os_vif_util [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:04:d7,bridge_name='br-int',has_traffic_filtering=True,id=31c187f6-645a-4415-a7a2-7c358adeb7c3,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31c187f6-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.493 185484 DEBUG os_vif [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:04:d7,bridge_name='br-int',has_traffic_filtering=True,id=31c187f6-645a-4415-a7a2-7c358adeb7c3,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31c187f6-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.494 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.495 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.495 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.499 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.499 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31c187f6-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.500 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31c187f6-64, col_values=(('external_ids', {'iface-id': '31c187f6-645a-4415-a7a2-7c358adeb7c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:04:d7', 'vm-uuid': '92ff85a4-5620-4dd0-8930-62b7f561edf6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.501 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:41 compute-0 NetworkManager[56191]: <info>  [1769541101.5030] manager: (tap31c187f6-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.504 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.511 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.512 185484 INFO os_vif [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:04:d7,bridge_name='br-int',has_traffic_filtering=True,id=31c187f6-645a-4415-a7a2-7c358adeb7c3,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31c187f6-64')
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.570 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.572 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.572 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.572 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No VIF found with MAC fa:16:3e:dc:04:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.573 185484 INFO nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Using config drive
Jan 27 19:11:41 compute-0 rsyslogd[235877]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 19:11:41.467 185484 DEBUG nova.virt.libvirt.vif [None req-ea34761b-83 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:11:41 compute-0 rsyslogd[235877]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 19:11:41.492 185484 DEBUG nova.virt.libvirt.vif [None req-ea34761b-83 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
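[editor note] The two rsyslogd warnings above report nova DEBUG records of 8192 bytes arriving against a configured limit of 8096 and being truncated; the second one is the large vif plug record (timestamped 19:11:41.492) reproduced earlier in this capture. If full records are wanted in syslog, the limit can be raised. The snippet below is only a sketch of the standard rsyslog directive with an arbitrary 64k value, not a recommendation for this deployment:

    # /etc/rsyslog.conf - raise the per-message size limit (rainerscript form, placed near the top)
    global(maxMessageSize="64k")
    # legacy-directive equivalent:
    # $MaxMessageSize 64k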
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.955 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updating instance_info_cache with network_info: [{"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.972 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:11:41 compute-0 nova_compute[185480]: 2026-01-27 19:11:41.972 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.078 185484 INFO nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Creating config drive at /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.config
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.086 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps495xp06 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.213 185484 DEBUG oslo_concurrency.processutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps495xp06" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
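[editor note] The config drive for instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 has now been built with mkisofs and the command returned 0. If its contents ever needed checking, the image can be inspected read-only on the hypervisor; the mount point below is an arbitrary example, and openstack/ is the usual top-level directory of a config drive:

    # Inspect the freshly built config drive without attaching it to the guest
    mount -o loop,ro /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.config /mnt
    ls -R /mnt/openstack
    umount /mnt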
Jan 27 19:11:42 compute-0 kernel: tap31c187f6-64: entered promiscuous mode
Jan 27 19:11:42 compute-0 NetworkManager[56191]: <info>  [1769541102.2830] manager: (tap31c187f6-64): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 27 19:11:42 compute-0 ovn_controller[97647]: 2026-01-27T19:11:42Z|00045|binding|INFO|Claiming lport 31c187f6-645a-4415-a7a2-7c358adeb7c3 for this chassis.
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.282 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:42 compute-0 ovn_controller[97647]: 2026-01-27T19:11:42Z|00046|binding|INFO|31c187f6-645a-4415-a7a2-7c358adeb7c3: Claiming fa:16:3e:dc:04:d7 192.168.0.191
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.287 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.298 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:04:d7 192.168.0.191'], port_security=['fa:16:3e:dc:04:d7 192.168.0.191'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-z6txynvcwoqi-6l35xl64nude-6efi37qfmun5-port-ynxgvvttwcdb', 'neutron:cidrs': '192.168.0.191/24', 'neutron:device_id': '92ff85a4-5620-4dd0-8930-62b7f561edf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-z6txynvcwoqi-6l35xl64nude-6efi37qfmun5-port-ynxgvvttwcdb', 'neutron:project_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a99badb-bb64-4e2f-95a8-78f317eb6676', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.228'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33ebbec8-56f4-45ac-84a6-f80dd4a7c167, chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=31c187f6-645a-4415-a7a2-7c358adeb7c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.299 106898 INFO neutron.agent.ovn.metadata.agent [-] Port 31c187f6-645a-4415-a7a2-7c358adeb7c3 in datapath 4f32262d-dee8-406b-8a5a-09e95f48c8d5 bound to our chassis
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.301 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f32262d-dee8-406b-8a5a-09e95f48c8d5
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.309 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.315 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:42 compute-0 ovn_controller[97647]: 2026-01-27T19:11:42Z|00047|binding|INFO|Setting lport 31c187f6-645a-4415-a7a2-7c358adeb7c3 up in Southbound
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.322 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b01433-329a-4c76-b4a9-9dc07b79ab61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:11:42 compute-0 systemd-udevd[241422]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:11:42 compute-0 ovn_controller[97647]: 2026-01-27T19:11:42Z|00048|binding|INFO|Setting lport 31c187f6-645a-4415-a7a2-7c358adeb7c3 ovn-installed in OVS
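[editor note] By this point os_vif has added tap31c187f6-64 to br-int with its iface-id and attached-mac external_ids (the DbSetCommand transaction above), and ovn-controller has claimed the port and marked it ovn-installed. A minimal way to confirm the binding from the OVS side on compute-0 might be:

    # Columns populated by os_vif and ovn-controller for the plugged interface
    ovs-vsctl --columns=name,external_ids,ofport list Interface tap31c187f6-64
    # ovn-controller sets external_ids:ovn-installed once flow installation for the port is complete
    ovs-vsctl get Interface tap31c187f6-64 external_ids:ovn-installed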
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.328 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:42 compute-0 systemd-machined[156762]: New machine qemu-4-instance-00000004.
Jan 27 19:11:42 compute-0 NetworkManager[56191]: <info>  [1769541102.3410] device (tap31c187f6-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 19:11:42 compute-0 NetworkManager[56191]: <info>  [1769541102.3416] device (tap31c187f6-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 19:11:42 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.356 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[2649a457-c14a-400b-b1eb-62e1c025ab77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.359 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[ed183e9d-1cdd-4586-9b22-7078ce4d9b85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.390 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[303ed257-b25e-4505-990d-55c26c2407e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.407 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[eafe19ef-1af6-4e52-9528-8b08ac877db0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f32262d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:bd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383551, 'reachable_time': 43795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241433, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.426 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[8b120de7-d57b-41f2-8f61-2c4791ebbb81]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap4f32262d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383562, 'tstamp': 383562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241436, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4f32262d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383566, 'tstamp': 383566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241436, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.427 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f32262d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.428 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.430 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.430 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f32262d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.431 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.431 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f32262d-d0, col_values=(('external_ids', {'iface-id': '5950ebf0-6d13-4405-b07d-fec152665bda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:11:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:11:42.431 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.550 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.551 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.551 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.552 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.644 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.680 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:42 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.744 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.745 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:42 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.814 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.816 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.837 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769541102.8143203, 92ff85a4-5620-4dd0-8930-62b7f561edf6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.837 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] VM Started (Lifecycle Event)
Jan 27 19:11:42 compute-0 podman[241444]: 2026-01-27 19:11:42.848314777 +0000 UTC m=+0.116917926 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, config_id=kepler, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vcs-type=git, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543)
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.868 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.879 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769541102.8146107, 92ff85a4-5620-4dd0-8930-62b7f561edf6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.880 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] VM Paused (Lifecycle Event)
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.893 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.894 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.922 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.929 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.957 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.965 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.974 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.995 185484 DEBUG nova.compute.manager [req-c74da91b-afd7-466d-ae76-8504f01e8128 req-5987fbde-6047-4a90-acda-299fcbb8178e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Received event network-vif-plugged-31c187f6-645a-4415-a7a2-7c358adeb7c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.995 185484 DEBUG oslo_concurrency.lockutils [req-c74da91b-afd7-466d-ae76-8504f01e8128 req-5987fbde-6047-4a90-acda-299fcbb8178e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.996 185484 DEBUG oslo_concurrency.lockutils [req-c74da91b-afd7-466d-ae76-8504f01e8128 req-5987fbde-6047-4a90-acda-299fcbb8178e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.996 185484 DEBUG oslo_concurrency.lockutils [req-c74da91b-afd7-466d-ae76-8504f01e8128 req-5987fbde-6047-4a90-acda-299fcbb8178e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.997 185484 DEBUG nova.compute.manager [req-c74da91b-afd7-466d-ae76-8504f01e8128 req-5987fbde-6047-4a90-acda-299fcbb8178e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Processing event network-vif-plugged-31c187f6-645a-4415-a7a2-7c358adeb7c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 19:11:42 compute-0 nova_compute[185480]: 2026-01-27 19:11:42.998 185484 DEBUG nova.compute.manager [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.004 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769541103.0038126, 92ff85a4-5620-4dd0-8930-62b7f561edf6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.004 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] VM Resumed (Lifecycle Event)
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.007 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.013 185484 INFO nova.virt.libvirt.driver [-] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Instance spawned successfully.
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.013 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.021 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.029 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.043 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.044 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.045 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.045 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.046 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.047 185484 DEBUG nova.virt.libvirt.driver [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.051 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.051 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.052 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.115 185484 INFO nova.compute.manager [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Took 8.03 seconds to spawn the instance on the hypervisor.
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.116 185484 DEBUG nova.compute.manager [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.139 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.139 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.194 185484 INFO nova.compute.manager [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Took 8.57 seconds to build instance.
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.196 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.197 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.242 185484 DEBUG oslo_concurrency.lockutils [None req-ea34761b-8321-403a-bfe2-f9783cb437e7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.266 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.273 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.330 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.332 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.393 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.394 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.454 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.454 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.513 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.524 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.585 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.586 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.656 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.657 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.743 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.744 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.809 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.918 185484 DEBUG nova.network.neutron [req-544d4fa1-e3bb-4a7f-92c6-37e6f2b66103 req-82896e17-618e-4f73-b7c4-e329f33e45e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Updated VIF entry in instance network info cache for port 31c187f6-645a-4415-a7a2-7c358adeb7c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.920 185484 DEBUG nova.network.neutron [req-544d4fa1-e3bb-4a7f-92c6-37e6f2b66103 req-82896e17-618e-4f73-b7c4-e329f33e45e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Updating instance_info_cache with network_info: [{"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:11:43 compute-0 nova_compute[185480]: 2026-01-27 19:11:43.942 185484 DEBUG oslo_concurrency.lockutils [req-544d4fa1-e3bb-4a7f-92c6-37e6f2b66103 req-82896e17-618e-4f73-b7c4-e329f33e45e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.310 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.311 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4875MB free_disk=72.37560272216797GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.312 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.312 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.391 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.392 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 41f46cfd-06bf-4ef6-85a3-cc6e8629637e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.392 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 04920b61-96ec-47fc-9d6d-dfdb491e0e77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.393 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.393 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.394 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.482 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.515 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.551 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:11:44 compute-0 nova_compute[185480]: 2026-01-27 19:11:44.552 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:11:45 compute-0 nova_compute[185480]: 2026-01-27 19:11:45.137 185484 DEBUG nova.compute.manager [req-b2312074-9b1e-4213-a4d5-51d710209fce req-1be6ffe5-e4a2-48fc-9ebe-e16da01f00d6 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Received event network-vif-plugged-31c187f6-645a-4415-a7a2-7c358adeb7c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:11:45 compute-0 nova_compute[185480]: 2026-01-27 19:11:45.138 185484 DEBUG oslo_concurrency.lockutils [req-b2312074-9b1e-4213-a4d5-51d710209fce req-1be6ffe5-e4a2-48fc-9ebe-e16da01f00d6 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:11:45 compute-0 nova_compute[185480]: 2026-01-27 19:11:45.138 185484 DEBUG oslo_concurrency.lockutils [req-b2312074-9b1e-4213-a4d5-51d710209fce req-1be6ffe5-e4a2-48fc-9ebe-e16da01f00d6 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:11:45 compute-0 nova_compute[185480]: 2026-01-27 19:11:45.139 185484 DEBUG oslo_concurrency.lockutils [req-b2312074-9b1e-4213-a4d5-51d710209fce req-1be6ffe5-e4a2-48fc-9ebe-e16da01f00d6 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:11:45 compute-0 nova_compute[185480]: 2026-01-27 19:11:45.139 185484 DEBUG nova.compute.manager [req-b2312074-9b1e-4213-a4d5-51d710209fce req-1be6ffe5-e4a2-48fc-9ebe-e16da01f00d6 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] No waiting events found dispatching network-vif-plugged-31c187f6-645a-4415-a7a2-7c358adeb7c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:11:45 compute-0 nova_compute[185480]: 2026-01-27 19:11:45.140 185484 WARNING nova.compute.manager [req-b2312074-9b1e-4213-a4d5-51d710209fce req-1be6ffe5-e4a2-48fc-9ebe-e16da01f00d6 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Received unexpected event network-vif-plugged-31c187f6-645a-4415-a7a2-7c358adeb7c3 for instance with vm_state active and task_state None.
Jan 27 19:11:45 compute-0 nova_compute[185480]: 2026-01-27 19:11:45.551 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:11:45 compute-0 nova_compute[185480]: 2026-01-27 19:11:45.552 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:11:45 compute-0 nova_compute[185480]: 2026-01-27 19:11:45.552 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:11:45 compute-0 nova_compute[185480]: 2026-01-27 19:11:45.553 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:11:46 compute-0 nova_compute[185480]: 2026-01-27 19:11:46.504 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:46 compute-0 nova_compute[185480]: 2026-01-27 19:11:46.509 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:11:47 compute-0 podman[241534]: 2026-01-27 19:11:47.333570592 +0000 UTC m=+0.117263265 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, config_id=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7)
Jan 27 19:11:47 compute-0 nova_compute[185480]: 2026-01-27 19:11:47.646 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:48 compute-0 nova_compute[185480]: 2026-01-27 19:11:48.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:11:49 compute-0 nova_compute[185480]: 2026-01-27 19:11:49.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:11:49 compute-0 nova_compute[185480]: 2026-01-27 19:11:49.572 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:11:50 compute-0 nova_compute[185480]: 2026-01-27 19:11:50.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:11:51 compute-0 nova_compute[185480]: 2026-01-27 19:11:51.508 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:52 compute-0 nova_compute[185480]: 2026-01-27 19:11:52.649 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:54 compute-0 podman[241557]: 2026-01-27 19:11:54.305974118 +0000 UTC m=+0.077746128 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 27 19:11:56 compute-0 nova_compute[185480]: 2026-01-27 19:11:56.511 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:57 compute-0 podman[241579]: 2026-01-27 19:11:57.305035301 +0000 UTC m=+0.076724603 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 19:11:57 compute-0 podman[241577]: 2026-01-27 19:11:57.307347048 +0000 UTC m=+0.087850065 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:11:57 compute-0 podman[241578]: 2026-01-27 19:11:57.34597294 +0000 UTC m=+0.122783157 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 27 19:11:57 compute-0 nova_compute[185480]: 2026-01-27 19:11:57.653 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:11:59 compute-0 podman[201378]: time="2026-01-27T19:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:11:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:11:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Jan 27 19:12:01 compute-0 openstack_network_exporter[204477]: ERROR   19:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:12:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:12:01 compute-0 openstack_network_exporter[204477]: ERROR   19:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:12:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:12:01 compute-0 nova_compute[185480]: 2026-01-27 19:12:01.514 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:02 compute-0 nova_compute[185480]: 2026-01-27 19:12:02.656 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:05 compute-0 sshd-session[241641]: Invalid user sol from 45.148.10.240 port 58458
Jan 27 19:12:05 compute-0 sshd-session[241641]: Connection closed by invalid user sol 45.148.10.240 port 58458 [preauth]
Jan 27 19:12:06 compute-0 nova_compute[185480]: 2026-01-27 19:12:06.517 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:07 compute-0 nova_compute[185480]: 2026-01-27 19:12:07.659 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:11 compute-0 podman[241643]: 2026-01-27 19:12:11.342595033 +0000 UTC m=+0.114624318 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:12:11 compute-0 podman[241644]: 2026-01-27 19:12:11.345555416 +0000 UTC m=+0.102254407 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 19:12:11 compute-0 nova_compute[185480]: 2026-01-27 19:12:11.520 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:12 compute-0 ovn_controller[97647]: 2026-01-27T19:12:12Z|00049|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Jan 27 19:12:12 compute-0 nova_compute[185480]: 2026-01-27 19:12:12.661 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:13 compute-0 podman[241685]: 2026-01-27 19:12:13.297911064 +0000 UTC m=+0.071936637 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, release=1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, name=ubi9)
Jan 27 19:12:15 compute-0 ovn_controller[97647]: 2026-01-27T19:12:15Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:04:d7 192.168.0.191
Jan 27 19:12:15 compute-0 ovn_controller[97647]: 2026-01-27T19:12:15Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:04:d7 192.168.0.191
Jan 27 19:12:16 compute-0 nova_compute[185480]: 2026-01-27 19:12:16.523 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:17 compute-0 nova_compute[185480]: 2026-01-27 19:12:17.663 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:18 compute-0 podman[241719]: 2026-01-27 19:12:18.319444506 +0000 UTC m=+0.088122052 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Jan 27 19:12:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:12:20.517 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:12:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:12:20.518 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:12:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:12:20.518 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:12:21 compute-0 nova_compute[185480]: 2026-01-27 19:12:21.527 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:23 compute-0 nova_compute[185480]: 2026-01-27 19:12:23.011 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:25 compute-0 podman[241741]: 2026-01-27 19:12:25.357048341 +0000 UTC m=+0.121284010 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0)
Jan 27 19:12:26 compute-0 nova_compute[185480]: 2026-01-27 19:12:26.529 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:27 compute-0 nova_compute[185480]: 2026-01-27 19:12:27.668 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:28 compute-0 podman[241759]: 2026-01-27 19:12:28.319310823 +0000 UTC m=+0.093991415 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:12:28 compute-0 podman[241761]: 2026-01-27 19:12:28.346591529 +0000 UTC m=+0.110510538 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:12:28 compute-0 podman[241760]: 2026-01-27 19:12:28.359641388 +0000 UTC m=+0.123183378 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
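Editor's note: the four health_status events above are emitted by podman's periodic healthcheck timers for ceilometer_agent_compute, podman_exporter, ovn_metadata_agent and ovn_controller, each running the '/openstack/healthcheck ...' test mounted from /var/lib/openstack/healthchecks. Below is a minimal Python sketch for triggering the same checks on demand; it assumes podman is on PATH and the caller has the same privileges as the timers, and the container names are taken from the events above.

    import subprocess

    # Container names taken from the health_status events logged above.
    for name in ("ceilometer_agent_compute", "podman_exporter",
                 "ovn_metadata_agent", "ovn_controller"):
        # "podman healthcheck run" exits 0 when the configured test passes.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy or error (rc={rc})")

A passing check (exit status 0) is what produces the health_status=healthy events recorded here.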
Jan 27 19:12:29 compute-0 podman[201378]: time="2026-01-27T19:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:12:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:12:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
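Editor's note: the two GET requests above are podman_exporter scraping the libpod REST API over the local podman socket (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data). The following is a minimal sketch of issuing the same containers/json query from Python over that UNIX socket; the printed field names ('Names', 'State') are the usual libpod list fields and should be treated as an assumption, not something shown in the log.

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a UNIX-domain socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint the exporter polls in the request logged above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for c in json.loads(resp.read()):
        print(c.get("Names"), c.get("State"))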
Jan 27 19:12:31 compute-0 openstack_network_exporter[204477]: ERROR   19:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:12:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:12:31 compute-0 openstack_network_exporter[204477]: ERROR   19:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:12:31 compute-0 openstack_network_exporter[204477]: 
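Editor's note: dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show only apply to the userspace (netdev/PMD) datapath; on a host whose Open vSwitch uses the kernel datapath there is no such datapath, and ovs-vswitchd answers "please specify an existing datapath", which is the most likely reading of the exporter errors above. A minimal sketch to reproduce the calls and list the datapaths that actually exist, assuming ovs-appctl is installed and ovs-vswitchd is running locally:

    import subprocess

    # The first two commands are the ones the exporter calls; dpif/show lists
    # the datapaths that are actually present on this host.
    for cmd in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show", "dpif/show"):
        result = subprocess.run(["ovs-appctl", cmd], capture_output=True, text=True)
        print(f"$ ovs-appctl {cmd}")
        print(result.stdout or result.stderr)

If dpif/show reports only a system@ovs-system (kernel) datapath, the two pmd commands are expected to fail exactly as logged.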
Jan 27 19:12:31 compute-0 nova_compute[185480]: 2026-01-27 19:12:31.532 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.096 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.096 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
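Editor's note: the two manager messages above say that the [pollsters] source contains more pollsters than worker threads and is being processed with a single thread, so the pollsters run one after another and the cycle length is roughly the sum of the individual pollster runtimes. A minimal, ceilometer-independent sketch of that queuing behaviour with concurrent.futures; the pollster names and sleep duration are invented purely for illustration:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def fake_pollster(name):
        time.sleep(0.1)          # stand-in for one pollster's work
        return name

    # max_workers=1 mirrors the "[1] threads" reported in the log above.
    with ThreadPoolExecutor(max_workers=1) as executor:
        start = time.monotonic()
        results = list(executor.map(fake_pollster, [f"pollster-{i}" for i in range(5)]))
        # Five 0.1 s tasks on one worker take ~0.5 s: they are serialized.
        print(results, f"{time.monotonic() - start:.1f}s")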
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.105 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b6b280bb-d859-43f3-836a-f93d00510948', 'name': 'test_0', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.109 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.109 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/92ff85a4-5620-4dd0-8930-62b7f561edf6 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:12:32 compute-0 nova_compute[185480]: 2026-01-27 19:12:32.671 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.853 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 27 Jan 2026 19:12:32 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-23d2314e-ec09-4f7f-944f-27499b3be9c7 x-openstack-request-id: req-23d2314e-ec09-4f7f-944f-27499b3be9c7 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.853 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "92ff85a4-5620-4dd0-8930-62b7f561edf6", "name": "vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf", "status": "ACTIVE", "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "user_id": "6d30d46dc88a4403b3a241949384d8f7", "metadata": {"metering.server_group": "bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871"}, "hostId": "d5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0", "image": {"id": "525193b7-cb5a-4d63-9747-3b917622bbe3", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/525193b7-cb5a-4d63-9747-3b917622bbe3"}]}, "flavor": {"id": "bc7c8c58-0a2b-4396-9f89-7ff8e35afa36", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bc7c8c58-0a2b-4396-9f89-7ff8e35afa36"}]}, "created": "2026-01-27T19:11:33Z", "updated": "2026-01-27T19:11:43Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.191", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:dc:04:d7"}, {"version": 4, "addr": "192.168.122.228", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:dc:04:d7"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/92ff85a4-5620-4dd0-8930-62b7f561edf6"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/92ff85a4-5620-4dd0-8930-62b7f561edf6"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T19:11:43.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.854 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/92ff85a4-5620-4dd0-8930-62b7f561edf6 used request id req-23d2314e-ec09-4f7f-944f-27499b3be9c7 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
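Editor's note: the REQ/RESP pair above is ceilometer's embedded novaclient fetching metering metadata for server 92ff85a4-5620-4dd0-8930-62b7f561edf6 from the internal Nova endpoint. A minimal sketch of making the same call with keystoneauth1 and python-novaclient; the auth URL, credentials and domain names below are placeholders (they do not appear in the log), and only the microversion, endpoint interface and server UUID are taken from the lines above:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    # Placeholder credentials and auth URL -- not shown in the log.
    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # assumption
        username="ceilometer",
        password="secret",
        project_name="service",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    sess = session.Session(auth=auth)

    # Microversion 2.1 and the internal interface match the logged request.
    nova = client.Client("2.1", session=sess, endpoint_type="internal")
    server = nova.servers.get("92ff85a4-5620-4dd0-8930-62b7f561edf6")
    print(server.name, server.status, server.metadata)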
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.855 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '92ff85a4-5620-4dd0-8930-62b7f561edf6', 'name': 'vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.859 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41f46cfd-06bf-4ef6-85a3-cc6e8629637e', 'name': 'vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.861 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '04920b61-96ec-47fc-9d6d-dfdb491e0e77', 'name': 'vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
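Editor's note: the "instance data" lines above come from ceilometer's libvirt-based discovery of the four running domains; for 92ff85a4-5620-4dd0-8930-62b7f561edf6 the agent additionally queried the Nova API (the REQ/RESP above). A minimal sketch of reading the Nova metadata block directly from libvirt, assuming libvirt-python is available and the usual qemu:///system URI; the namespace URI is the one Nova uses for its domain metadata:

    import libvirt

    # Nova stores per-instance metadata under this XML namespace in each domain.
    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.0"

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        try:
            meta = dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS, 0)
        except libvirt.libvirtError:
            meta = None  # domain has no Nova metadata element
        print(dom.name(), dom.UUIDString())
        if meta:
            print(meta)
    conn.close()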
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.862 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.862 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.862 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.862 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T19:12:32.862249) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.889 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 21766144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.889 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.890 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.912 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.912 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.912 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.934 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.935 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.935 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.960 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.960 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.961 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.962 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.963 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.963 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.963 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T19:12:32.963405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:32.981 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.003 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/memory.usage volume: 49.7265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.024 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/memory.usage volume: 48.96875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.043 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/memory.usage volume: 49.078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.045 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.045 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.045 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.045 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T19:12:33.045622) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.046 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.046 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.047 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.047 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T19:12:33.047293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.050 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.052 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 92ff85a4-5620-4dd0-8930-62b7f561edf6 / tap31c187f6-64 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.052 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.055 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.bytes.delta volume: 3389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.058 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.bytes.delta volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.058 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.058 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T19:12:33.058888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.112 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.112 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.112 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.168 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.169 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.169 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.231 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.231 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.232 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.311 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.312 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.312 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.313 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.313 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.314 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.314 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.314 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T19:12:33.314347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.315 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets volume: 53 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.316 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.316 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.317 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.317 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.317 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.317 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.drop volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.317 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.318 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.319 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.321 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T19:12:33.317357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.322 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.323 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.323 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.323 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T19:12:33.323320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.324 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.325 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.326 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.326 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.327 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.328 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.328 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.329 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.329 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.330 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.330 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.332 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.332 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T19:12:33.332398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.332 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/cpu volume: 37540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.333 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/cpu volume: 32480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.333 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/cpu volume: 264080000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.334 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/cpu volume: 35290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.334 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.335 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T19:12:33.335328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.336 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.336 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.336 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.337 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.337 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.338 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.338 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.338 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.339 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.339 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.339 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.341 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.341 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.342 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.342 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.342 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.343 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T19:12:33.341716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.344 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.344 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.344 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.345 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 713872381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.346 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 102610265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.346 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 79720785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.347 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 724384888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T19:12:33.344920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.347 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 110122219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.348 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 191687644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.348 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 746276888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.348 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 98242096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.349 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 91644949 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.349 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.latency volume: 623753590 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.349 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.latency volume: 105958430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.350 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.latency volume: 83308410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.351 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.351 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.351 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.352 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.352 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.352 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.353 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.353 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.354 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.354 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.355 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.355 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.355 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.355 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.356 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.356 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets volume: 59 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.357 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.357 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.358 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.358 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T19:12:33.352063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T19:12:33.355543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.359 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.359 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.359 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.359 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T19:12:33.359017) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.360 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.360 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.361 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.361 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.362 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.362 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.362 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.363 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.363 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.364 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.365 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.365 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T19:12:33.365218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.365 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.366 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.366 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.bytes volume: 7130 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.366 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.367 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.367 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.367 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.368 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.368 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 2565149834 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.368 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 13830550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.368 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.369 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 2484413665 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.369 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 10920156 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T19:12:33.368128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.369 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.370 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 1318160178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.370 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 12613240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.370 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.370 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.latency volume: 1287563622 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.371 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.latency volume: 10814945 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.371 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.371 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.372 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.372 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.372 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.372 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.372 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.372 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.372 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.bytes.delta volume: 2502 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.373 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.bytes.delta volume: 337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.373 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.373 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.374 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.374 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.374 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.374 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.375 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.375 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T19:12:33.372321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.375 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.375 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.376 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T19:12:33.374228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.376 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.376 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.376 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.377 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.377 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.377 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.377 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.378 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.378 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.378 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.379 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.379 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.379 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.379 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.379 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T19:12:33.378134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T19:12:33.379408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.379 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf>]
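Note on the ERROR above: when the libvirt inspector cannot supply data for a rate meter (the "LibvirtInspector does not provide data" DEBUG line just before it), the pollster signals a permanent failure and the polling manager stops asking it about those resources on that source. The following is a minimal, self-contained sketch of that blacklist-on-permanent-error pattern; the classes are simplified stand-ins written for illustration, not the real ceilometer.polling API.

# Simplified model of the behaviour visible in the log lines above.
# These names are illustrative stand-ins, NOT the actual ceilometer classes.

class PollsterPermanentError(Exception):
    """Raised by a pollster to say: never poll these resources again."""
    def __init__(self, resources):
        super().__init__(resources)
        self.resources = resources


class RatePollster:
    name = "network.outgoing.bytes.rate"

    def get_samples(self, resources):
        if not resources:
            return []
        # The inspector provides no rate data, so every resource is a
        # permanent failure for this pollster (mirrors the ERROR above).
        raise PollsterPermanentError(resources)


class Manager:
    def __init__(self):
        self.blacklist = {}          # pollster name -> set of blocked resources

    def poll(self, pollster, resources):
        allowed = [r for r in resources
                   if r not in self.blacklist.get(pollster.name, set())]
        try:
            return list(pollster.get_samples(allowed))
        except PollsterPermanentError as err:
            print("Prevent pollster %s from polling %s anymore!"
                  % (pollster.name, err.resources))
            self.blacklist.setdefault(pollster.name, set()).update(err.resources)
            return []


if __name__ == "__main__":
    mgr = Manager()
    servers = ["vn-nvcwoqi-...-vnf-ysy67xkjsmsf"]   # placeholder server name
    mgr.poll(RatePollster(), servers)   # first cycle: error logged, resource blacklisted
    mgr.poll(RatePollster(), servers)   # later cycles: the resource is simply skipped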
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.380 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.380 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.380 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.380 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.381 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.381 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T19:12:33.380489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.382 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.382 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.383 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.383 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.383 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.384 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T19:12:33.382455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T19:12:33.383722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.384 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.384 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.384 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.385 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.385 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.385 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.386 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.386 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.386 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.386 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.387 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.387 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.387 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.387 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.387 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes volume: 2088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.388 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.388 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.bytes volume: 8322 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.389 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.389 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T19:12:33.387838) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.389 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.389 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.389 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.389 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.390 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.390 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.390 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf>]
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.390 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T19:12:33.390061) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:12:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:12:33.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
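The block ending here is the end-of-cycle summary: one "Finished processing pollster [...]" line per meter, plus the occasional "Prevent pollster ... anymore!" error. A small parsing helper can turn a captured journal excerpt into a per-cycle report; the regexes below are written against the exact line format shown above, and the input filename is a hypothetical excerpt file, so adjust both if your capture differs.

import re
from collections import Counter

# Patterns matched against the ceilometer_agent_compute lines shown above.
FINISHED = re.compile(r"Finished processing pollster \[([^\]]+)\]")
ERROR = re.compile(r"ERROR ceilometer\.polling\.manager .* Prevent pollster (\S+) from polling")

def summarize(lines):
    """Count finished pollsters and permanently blacklisted ones."""
    finished, blacklisted = Counter(), Counter()
    for line in lines:
        m = FINISHED.search(line)
        if m:
            finished[m.group(1)] += 1
        m = ERROR.search(line)
        if m:
            blacklisted[m.group(1)] += 1
    return finished, blacklisted

if __name__ == "__main__":
    with open("compute-0-journal.log") as fh:   # hypothetical excerpt of this journal
        done, bad = summarize(fh)
    print(f"{len(done)} pollsters finished, {len(bad)} hit PollsterPermanentError")
    for name in bad:
        print("blacklisted:", name)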
Jan 27 19:12:36 compute-0 nova_compute[185480]: 2026-01-27 19:12:36.537 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:37 compute-0 nova_compute[185480]: 2026-01-27 19:12:37.674 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:39 compute-0 nova_compute[185480]: 2026-01-27 19:12:39.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:12:39 compute-0 nova_compute[185480]: 2026-01-27 19:12:39.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:12:39 compute-0 nova_compute[185480]: 2026-01-27 19:12:39.775 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:12:39 compute-0 nova_compute[185480]: 2026-01-27 19:12:39.775 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:12:39 compute-0 nova_compute[185480]: 2026-01-27 19:12:39.776 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:12:41 compute-0 nova_compute[185480]: 2026-01-27 19:12:41.362 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Updating instance_info_cache with network_info: [{"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
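The network_info payload in the line above is plain JSON wedged between the "network_info: " marker and the trailing function name, so it can be extracted and inspected directly. A short extraction sketch follows; the slicing markers come from the line format above, and the example result is the fixed/floating pair visible in that same line.

import json

def extract_network_info(line):
    """Pull the JSON list out of a nova 'Updating instance_info_cache' line."""
    start = line.index("network_info: ") + len("network_info: ")
    end = line.rindex(" update_instance_cache_with_nw_info")
    return json.loads(line[start:end])

def addresses(network_info):
    """Yield (fixed_ip, [floating_ips]) pairs for every port in the cache."""
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                yield ip["address"], floats

# Applied to the line above this yields ("192.168.0.63", ["192.168.122.186"]).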
Jan 27 19:12:41 compute-0 nova_compute[185480]: 2026-01-27 19:12:41.386 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:12:41 compute-0 nova_compute[185480]: 2026-01-27 19:12:41.386 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:12:41 compute-0 nova_compute[185480]: 2026-01-27 19:12:41.540 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:42 compute-0 podman[241829]: 2026-01-27 19:12:42.343114191 +0000 UTC m=+0.109132254 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 19:12:42 compute-0 podman[241828]: 2026-01-27 19:12:42.351750322 +0000 UTC m=+0.117968980 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 19:12:42 compute-0 nova_compute[185480]: 2026-01-27 19:12:42.678 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:43 compute-0 nova_compute[185480]: 2026-01-27 19:12:43.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:12:44 compute-0 podman[241872]: 2026-01-27 19:12:44.29836599 +0000 UTC m=+0.081604093 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, name=ubi9)
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.543 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.544 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.545 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.546 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.676 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.757 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.759 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.823 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.826 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.888 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.890 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.965 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:44 compute-0 nova_compute[185480]: 2026-01-27 19:12:44.978 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.040 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.041 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.099 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.101 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.167 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.169 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.266 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.279 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.373 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.375 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.476 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.478 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.541 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.543 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.607 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.613 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.674 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.675 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.735 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.736 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.794 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.794 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:12:45 compute-0 nova_compute[185480]: 2026-01-27 19:12:45.867 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
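Each pair of processutils lines above is the resource tracker measuring one image: the same `qemu-img info --force-share --output=json` command, run under oslo_concurrency's prlimit wrapper (address space capped at 1 GiB, CPU time at 30 s). Below is a standalone sketch of the same measurement using plain subprocess rather than nova's code path; the instance path is the one visible in the log, and the prlimit wrapper is deliberately omitted.

import json
import os
import subprocess

def qemu_img_info(path):
    """Run 'qemu-img info' the way the log shows nova doing it
    (minus the prlimit wrapper) and return the parsed JSON."""
    out = subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True,
        env={**os.environ, "LC_ALL": "C", "LANG": "C"},
    )
    return json.loads(out.stdout)

if __name__ == "__main__":
    info = qemu_img_info(
        "/var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk")
    print(info["virtual-size"], info.get("actual-size"))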
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.245 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.246 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4681MB free_disk=72.35468292236328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.247 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.247 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.329 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.329 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 41f46cfd-06bf-4ef6-85a3-cc6e8629637e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.330 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 04920b61-96ec-47fc-9d6d-dfdb491e0e77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.330 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.330 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.330 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.449 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.465 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.466 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.466 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:12:46 compute-0 nova_compute[185480]: 2026-01-27 19:12:46.542 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:47 compute-0 nova_compute[185480]: 2026-01-27 19:12:47.680 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:48 compute-0 nova_compute[185480]: 2026-01-27 19:12:48.467 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:12:48 compute-0 nova_compute[185480]: 2026-01-27 19:12:48.467 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:12:48 compute-0 nova_compute[185480]: 2026-01-27 19:12:48.467 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:12:48 compute-0 nova_compute[185480]: 2026-01-27 19:12:48.467 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:12:49 compute-0 podman[241940]: 2026-01-27 19:12:49.31775852 +0000 UTC m=+0.095583383 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350)
Jan 27 19:12:49 compute-0 nova_compute[185480]: 2026-01-27 19:12:49.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:12:50 compute-0 nova_compute[185480]: 2026-01-27 19:12:50.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:12:51 compute-0 nova_compute[185480]: 2026-01-27 19:12:51.544 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:52 compute-0 nova_compute[185480]: 2026-01-27 19:12:52.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:12:52 compute-0 nova_compute[185480]: 2026-01-27 19:12:52.683 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:56 compute-0 podman[241963]: 2026-01-27 19:12:56.357681302 +0000 UTC m=+0.127713098 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260126, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute)
Jan 27 19:12:56 compute-0 nova_compute[185480]: 2026-01-27 19:12:56.548 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:57 compute-0 nova_compute[185480]: 2026-01-27 19:12:57.687 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:12:59 compute-0 podman[241983]: 2026-01-27 19:12:59.327145243 +0000 UTC m=+0.089865924 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 19:12:59 compute-0 podman[241985]: 2026-01-27 19:12:59.349470538 +0000 UTC m=+0.111910352 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 19:12:59 compute-0 podman[241984]: 2026-01-27 19:12:59.418757549 +0000 UTC m=+0.173326511 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 19:12:59 compute-0 podman[201378]: time="2026-01-27T19:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:12:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:12:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4369 "" "Go-http-client/1.1"
Jan 27 19:13:01 compute-0 openstack_network_exporter[204477]: ERROR   19:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:13:01 compute-0 openstack_network_exporter[204477]: ERROR   19:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:13:01 compute-0 nova_compute[185480]: 2026-01-27 19:13:01.550 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:02 compute-0 nova_compute[185480]: 2026-01-27 19:13:02.689 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:06 compute-0 nova_compute[185480]: 2026-01-27 19:13:06.554 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:07 compute-0 nova_compute[185480]: 2026-01-27 19:13:07.692 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:11 compute-0 nova_compute[185480]: 2026-01-27 19:13:11.556 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:12 compute-0 nova_compute[185480]: 2026-01-27 19:13:12.694 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:13 compute-0 podman[242048]: 2026-01-27 19:13:13.330418599 +0000 UTC m=+0.097624354 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 19:13:13 compute-0 podman[242047]: 2026-01-27 19:13:13.355579993 +0000 UTC m=+0.110678562 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 19:13:14 compute-0 podman[242090]: 2026-01-27 19:13:14.784235559 +0000 UTC m=+0.099236343 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, name=ubi9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, release=1214.1726694543, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=)
Jan 27 19:13:16 compute-0 nova_compute[185480]: 2026-01-27 19:13:16.562 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:17 compute-0 nova_compute[185480]: 2026-01-27 19:13:17.698 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:20 compute-0 podman[242110]: 2026-01-27 19:13:20.305582057 +0000 UTC m=+0.077950453 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Jan 27 19:13:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:13:20.518 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:13:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:13:20.519 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:13:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:13:20.519 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:13:21 compute-0 nova_compute[185480]: 2026-01-27 19:13:21.566 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:21 compute-0 sshd-session[242129]: Received disconnect from 45.148.10.151 port 56342:11:  [preauth]
Jan 27 19:13:21 compute-0 sshd-session[242129]: Disconnected from authenticating user root 45.148.10.151 port 56342 [preauth]
Jan 27 19:13:22 compute-0 nova_compute[185480]: 2026-01-27 19:13:22.700 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:26 compute-0 nova_compute[185480]: 2026-01-27 19:13:26.570 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:27 compute-0 podman[242131]: 2026-01-27 19:13:27.316109311 +0000 UTC m=+0.091038263 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 19:13:27 compute-0 nova_compute[185480]: 2026-01-27 19:13:27.707 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:29 compute-0 podman[201378]: time="2026-01-27T19:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:13:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:13:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Jan 27 19:13:30 compute-0 podman[242151]: 2026-01-27 19:13:30.322640996 +0000 UTC m=+0.099578671 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:13:30 compute-0 podman[242152]: 2026-01-27 19:13:30.354317589 +0000 UTC m=+0.123676419 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:13:30 compute-0 podman[242153]: 2026-01-27 19:13:30.354722309 +0000 UTC m=+0.118168045 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 19:13:31 compute-0 openstack_network_exporter[204477]: ERROR   19:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:13:31 compute-0 openstack_network_exporter[204477]: ERROR   19:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:13:31 compute-0 nova_compute[185480]: 2026-01-27 19:13:31.573 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:32 compute-0 nova_compute[185480]: 2026-01-27 19:13:32.707 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:36 compute-0 nova_compute[185480]: 2026-01-27 19:13:36.575 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:37 compute-0 nova_compute[185480]: 2026-01-27 19:13:37.711 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:39 compute-0 nova_compute[185480]: 2026-01-27 19:13:39.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:13:39 compute-0 nova_compute[185480]: 2026-01-27 19:13:39.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:13:39 compute-0 nova_compute[185480]: 2026-01-27 19:13:39.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:13:39 compute-0 nova_compute[185480]: 2026-01-27 19:13:39.837 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:13:39 compute-0 nova_compute[185480]: 2026-01-27 19:13:39.837 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:13:39 compute-0 nova_compute[185480]: 2026-01-27 19:13:39.838 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:13:39 compute-0 nova_compute[185480]: 2026-01-27 19:13:39.838 185484 DEBUG nova.objects.instance [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6b280bb-d859-43f3-836a-f93d00510948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:13:40 compute-0 nova_compute[185480]: 2026-01-27 19:13:40.867 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updating instance_info_cache with network_info: [{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:13:40 compute-0 nova_compute[185480]: 2026-01-27 19:13:40.883 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:13:40 compute-0 nova_compute[185480]: 2026-01-27 19:13:40.884 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:13:41 compute-0 nova_compute[185480]: 2026-01-27 19:13:41.578 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:42 compute-0 nova_compute[185480]: 2026-01-27 19:13:42.713 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:42 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 19:13:43 compute-0 sshd-session[242217]: Invalid user janie from 213.209.159.159 port 34523
Jan 27 19:13:43 compute-0 podman[242219]: 2026-01-27 19:13:43.703314546 +0000 UTC m=+0.111030121 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:13:43 compute-0 podman[242220]: 2026-01-27 19:13:43.720986367 +0000 UTC m=+0.135498628 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 27 19:13:44 compute-0 sshd-session[242217]: Received disconnect from 213.209.159.159 port 34523:11: Bye [preauth]
Jan 27 19:13:44 compute-0 sshd-session[242217]: Disconnected from invalid user janie 213.209.159.159 port 34523 [preauth]
Jan 27 19:13:45 compute-0 podman[242263]: 2026-01-27 19:13:45.345495053 +0000 UTC m=+0.102122474 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=kepler, release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Jan 27 19:13:45 compute-0 nova_compute[185480]: 2026-01-27 19:13:45.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.514 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.543 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.544 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.545 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.545 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.581 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.635 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.704 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.706 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.761 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.763 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.853 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.854 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.945 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:46 compute-0 nova_compute[185480]: 2026-01-27 19:13:46.954 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.017 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.018 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.099 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.101 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.160 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.162 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.225 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.238 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.315 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.316 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.386 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.388 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.458 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.459 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.528 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.537 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.611 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.612 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.684 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.685 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.715 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.754 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.755 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:13:47 compute-0 nova_compute[185480]: 2026-01-27 19:13:47.836 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.227 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.228 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4608MB free_disk=72.35468292236328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.228 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.229 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.318 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.318 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 41f46cfd-06bf-4ef6-85a3-cc6e8629637e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.318 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 04920b61-96ec-47fc-9d6d-dfdb491e0e77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.319 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.319 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.319 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.425 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.441 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.444 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:13:48 compute-0 nova_compute[185480]: 2026-01-27 19:13:48.444 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:13:50 compute-0 nova_compute[185480]: 2026-01-27 19:13:50.439 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:13:50 compute-0 nova_compute[185480]: 2026-01-27 19:13:50.440 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:13:50 compute-0 nova_compute[185480]: 2026-01-27 19:13:50.440 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:13:50 compute-0 nova_compute[185480]: 2026-01-27 19:13:50.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:13:51 compute-0 podman[242330]: 2026-01-27 19:13:51.303866022 +0000 UTC m=+0.075001442 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 19:13:51 compute-0 nova_compute[185480]: 2026-01-27 19:13:51.586 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:52 compute-0 nova_compute[185480]: 2026-01-27 19:13:52.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:13:52 compute-0 nova_compute[185480]: 2026-01-27 19:13:52.716 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:53 compute-0 nova_compute[185480]: 2026-01-27 19:13:53.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:13:56 compute-0 nova_compute[185480]: 2026-01-27 19:13:56.590 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:57 compute-0 nova_compute[185480]: 2026-01-27 19:13:57.719 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:13:58 compute-0 podman[242348]: 2026-01-27 19:13:58.392830943 +0000 UTC m=+0.155573248 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Jan 27 19:13:59 compute-0 podman[201378]: time="2026-01-27T19:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:13:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:13:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 27 19:14:01 compute-0 podman[242369]: 2026-01-27 19:14:01.301823707 +0000 UTC m=+0.078288171 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:14:01 compute-0 podman[242371]: 2026-01-27 19:14:01.314651041 +0000 UTC m=+0.085088958 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:14:01 compute-0 podman[242370]: 2026-01-27 19:14:01.380978239 +0000 UTC m=+0.140991492 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:14:01 compute-0 openstack_network_exporter[204477]: ERROR   19:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:14:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:14:01 compute-0 openstack_network_exporter[204477]: ERROR   19:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:14:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:14:01 compute-0 nova_compute[185480]: 2026-01-27 19:14:01.593 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:02 compute-0 nova_compute[185480]: 2026-01-27 19:14:02.722 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:06 compute-0 nova_compute[185480]: 2026-01-27 19:14:06.596 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:07 compute-0 nova_compute[185480]: 2026-01-27 19:14:07.724 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:11 compute-0 nova_compute[185480]: 2026-01-27 19:14:11.600 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:12 compute-0 nova_compute[185480]: 2026-01-27 19:14:12.726 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:14 compute-0 podman[242433]: 2026-01-27 19:14:14.298339752 +0000 UTC m=+0.072195153 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:14:14 compute-0 podman[242434]: 2026-01-27 19:14:14.320852512 +0000 UTC m=+0.088871451 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:14:16 compute-0 podman[242475]: 2026-01-27 19:14:16.338617455 +0000 UTC m=+0.108243472 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, release-0.7.12=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release=1214.1726694543, config_id=kepler, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 19:14:16 compute-0 nova_compute[185480]: 2026-01-27 19:14:16.604 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:17 compute-0 nova_compute[185480]: 2026-01-27 19:14:17.728 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:19 compute-0 sshd-session[242496]: Invalid user sol from 45.148.10.240 port 43398
Jan 27 19:14:19 compute-0 sshd-session[242496]: Connection closed by invalid user sol 45.148.10.240 port 43398 [preauth]
Jan 27 19:14:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:14:20.520 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:14:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:14:20.522 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:14:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:14:20.523 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:14:21 compute-0 nova_compute[185480]: 2026-01-27 19:14:21.607 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:22 compute-0 podman[242499]: 2026-01-27 19:14:22.47355626 +0000 UTC m=+0.120688368 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., version=9.6, name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 27 19:14:22 compute-0 nova_compute[185480]: 2026-01-27 19:14:22.730 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:26 compute-0 nova_compute[185480]: 2026-01-27 19:14:26.610 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:27 compute-0 nova_compute[185480]: 2026-01-27 19:14:27.733 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:29 compute-0 podman[242520]: 2026-01-27 19:14:29.323007038 +0000 UTC m=+0.081623904 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 19:14:29 compute-0 podman[201378]: time="2026-01-27T19:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:14:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:14:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Jan 27 19:14:31 compute-0 openstack_network_exporter[204477]: ERROR   19:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:14:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:14:31 compute-0 openstack_network_exporter[204477]: ERROR   19:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:14:31 compute-0 nova_compute[185480]: 2026-01-27 19:14:31.614 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.096 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.097 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.105 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b6b280bb-d859-43f3-836a-f93d00510948', 'name': 'test_0', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.110 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '92ff85a4-5620-4dd0-8930-62b7f561edf6', 'name': 'vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.114 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41f46cfd-06bf-4ef6-85a3-cc6e8629637e', 'name': 'vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.118 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '04920b61-96ec-47fc-9d6d-dfdb491e0e77', 'name': 'vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.118 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.119 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.119 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T19:14:32.119463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.144 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 21766144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.145 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.145 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.172 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.173 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.173 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.198 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.199 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.199 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.227 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.227 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.228 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.228 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.228 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.228 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.229 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.229 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.229 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T19:14:32.229193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.250 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.274 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/memory.usage volume: 49.109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.302 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/memory.usage volume: 48.96875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 podman[242544]: 2026-01-27 19:14:32.319903654 +0000 UTC m=+0.074728734 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:14:32 compute-0 podman[242542]: 2026-01-27 19:14:32.323465012 +0000 UTC m=+0.098905186 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.336 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/memory.usage volume: 49.078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.337 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.337 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.337 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.337 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.338 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.338 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.338 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.339 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.339 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T19:14:32.337480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T19:14:32.339151) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.343 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.346 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.bytes.delta volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 podman[242543]: 2026-01-27 19:14:32.34964971 +0000 UTC m=+0.122403268 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.351 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.355 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.355 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.356 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.356 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.356 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.356 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T19:14:32.356403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.415 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.415 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.416 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.479 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.480 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.480 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.542 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.543 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.543 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.611 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.612 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.612 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.612 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.613 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.613 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.613 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.613 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.614 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets volume: 53 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T19:14:32.613327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.614 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.614 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.615 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.615 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.615 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.615 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.615 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.drop volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.615 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.616 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.616 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T19:14:32.615400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.616 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.616 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.617 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.617 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.617 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.617 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.617 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.617 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.617 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.618 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.618 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T19:14:32.617314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.618 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.619 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.619 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.619 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.619 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.619 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.620 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.620 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.621 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.621 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.621 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.621 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.621 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/cpu volume: 38990000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.621 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/cpu volume: 33990000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.622 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/cpu volume: 265550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.622 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/cpu volume: 36760000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.623 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T19:14:32.621581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.623 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.623 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.623 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T19:14:32.623499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.624 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.624 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.624 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.624 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.625 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.625 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.625 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.625 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.625 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.626 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.626 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.626 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.627 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.627 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.627 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.627 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.627 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T19:14:32.627098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.628 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.628 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.628 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.629 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 713872381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T19:14:32.628882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.629 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 102610265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.629 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 79720785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.629 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 724384888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.629 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 110122219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.630 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 191687644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.630 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 746276888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.630 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 98242096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.630 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.read.latency volume: 91644949 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.630 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.latency volume: 623753590 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.631 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.latency volume: 105958430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.631 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.latency volume: 83308410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.631 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.632 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.632 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.632 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.632 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.633 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T19:14:32.632187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.633 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.633 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.633 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.633 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.634 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.634 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.634 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets volume: 59 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.634 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T19:14:32.633952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.635 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.635 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.635 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.635 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.636 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T19:14:32.635933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.636 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.636 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.637 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.637 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.637 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.637 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.637 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.638 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.638 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.638 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.638 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.639 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.639 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.639 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T19:14:32.639891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.640 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.640 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.640 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.bytes volume: 7130 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.641 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.641 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.641 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.642 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 2565149834 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T19:14:32.641876) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.642 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 13830550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.642 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.642 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 2519550782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.643 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 10920156 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.643 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.643 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 1318160178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.643 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 12613240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.643 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.644 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.latency volume: 1287563622 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.644 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.latency volume: 10814945 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.644 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.645 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.645 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.645 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.645 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.645 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.bytes.delta volume: 422 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.646 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.646 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.646 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.646 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.647 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.647 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.647 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T19:14:32.645417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.648 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T19:14:32.647108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.648 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.648 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.648 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.649 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.649 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.649 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.649 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.649 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.650 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.650 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.650 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.651 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T19:14:32.650986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.652 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.652 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.652 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.653 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.653 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T19:14:32.652821) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.653 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.653 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.653 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.654 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.654 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.654 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.654 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.654 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T19:14:32.654622) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.655 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.655 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.655 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.655 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.656 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.656 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.656 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.656 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T19:14:32.656000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.656 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.657 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.657 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.657 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.657 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.657 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.658 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.658 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.658 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.658 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.659 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.659 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.659 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.659 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.659 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.660 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes volume: 2088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.660 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.660 14 DEBUG ceilometer.compute.pollsters [-] 41f46cfd-06bf-4ef6-85a3-cc6e8629637e/network.incoming.bytes volume: 8322 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.660 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.661 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T19:14:32.659922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.661 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.661 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.661 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:14:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:14:32.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
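[editor's note] The ceilometer_agent_compute DEBUG lines above trace one polling cycle per meter: run the [local_instances] discovery method, check whether the pollster needs coordination (group None means no hashring lookup), record a heartbeat, emit one sample per instance, then log the pollster as finished. The sketch below is only an illustration of that control flow under those assumptions; Pollster, discover_local_instances, filter_through_hashring and emit_sample are hypothetical stand-ins, not ceilometer's real classes or methods.

```python
# Minimal sketch of the polling loop the DEBUG lines above trace.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Pollster:
    name: str                          # e.g. "disk.device.write.bytes"
    coordination_group: str | None = None   # None => no coordination required


def discover_local_instances() -> list[str]:
    """Stand-in for the [local_instances] discovery method named in the log."""
    return ["b6b280bb-d859-43f3-836a-f93d00510948",
            "92ff85a4-5620-4dd0-8930-62b7f561edf6"]


def filter_through_hashring(resources: list[str], group: str) -> list[str]:
    """Stand-in for hashring partitioning across agents (unused when group is None)."""
    return resources


def emit_sample(instance_id: str, meter: str, volume: int) -> None:
    print(f"{instance_id}/{meter} volume: {volume}")


def run_polling_task(pollsters: list[Pollster]) -> None:
    for p in pollsters:
        # "Executing discovery process ... discovery method [local_instances]"
        resources = discover_local_instances()
        # "Checking if we need coordination ..." -- every pollster in this log
        # has coordination group None, so no hashring lookup happens.
        if p.coordination_group is not None:
            resources = filter_through_hashring(resources, p.coordination_group)
        # "Pollster heartbeat update: <meter>"
        print(f"heartbeat {p.name} {datetime.now(timezone.utc).isoformat()}")
        for instance_id in resources:
            emit_sample(instance_id, p.name, volume=0)  # placeholder volume
        print(f"Finished polling pollster {p.name}")


if __name__ == "__main__":
    run_polling_task([Pollster("disk.root.size"), Pollster("network.incoming.bytes")])
```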
Jan 27 19:14:32 compute-0 nova_compute[185480]: 2026-01-27 19:14:32.734 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:36 compute-0 nova_compute[185480]: 2026-01-27 19:14:36.618 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:37 compute-0 nova_compute[185480]: 2026-01-27 19:14:37.736 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:40 compute-0 nova_compute[185480]: 2026-01-27 19:14:40.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:14:40 compute-0 nova_compute[185480]: 2026-01-27 19:14:40.517 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:14:40 compute-0 nova_compute[185480]: 2026-01-27 19:14:40.896 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:14:40 compute-0 nova_compute[185480]: 2026-01-27 19:14:40.897 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:14:40 compute-0 nova_compute[185480]: 2026-01-27 19:14:40.898 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:14:41 compute-0 nova_compute[185480]: 2026-01-27 19:14:41.620 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:42 compute-0 nova_compute[185480]: 2026-01-27 19:14:42.738 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:42 compute-0 nova_compute[185480]: 2026-01-27 19:14:42.946 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updating instance_info_cache with network_info: [{"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:14:42 compute-0 nova_compute[185480]: 2026-01-27 19:14:42.977 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:14:42 compute-0 nova_compute[185480]: 2026-01-27 19:14:42.978 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
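[editor's note] The _heal_instance_info_cache task above writes the network_info structure logged at 19:14:42.946 into the instance info cache. As a quick way to read such a blob, the sketch below pulls the MAC, fixed IPs and floating IPs out of a copy trimmed to the fields it uses; summarize_ports is our helper name, not part of Nova.

```python
# Extract per-port addresses from a (trimmed) instance_info_cache network_info blob.
import json

network_info_json = '''
[{"id": "1447625c-00ab-407e-94d6-83dc67aba59c",
  "address": "fa:16:3e:6c:f3:b6",
  "network": {"label": "private",
              "subnets": [{"cidr": "192.168.0.0/24",
                           "ips": [{"address": "192.168.0.54", "type": "fixed",
                                    "floating_ips": [{"address": "192.168.122.232",
                                                      "type": "floating"}]}]}]}}]
'''


def summarize_ports(network_info: list) -> list:
    """Collect MAC, fixed IPs and floating IPs for each cached VIF."""
    summary = []
    for vif in network_info:
        fixed, floating = [], []
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fixed.append(ip["address"])
                floating.extend(f["address"] for f in ip.get("floating_ips", []))
        summary.append({"mac": vif["address"], "fixed": fixed, "floating": floating})
    return summary


print(summarize_ports(json.loads(network_info_json)))
# [{'mac': 'fa:16:3e:6c:f3:b6', 'fixed': ['192.168.0.54'], 'floating': ['192.168.122.232']}]
```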
Jan 27 19:14:44 compute-0 podman[242606]: 2026-01-27 19:14:44.752317124 +0000 UTC m=+0.067556820 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 19:14:44 compute-0 podman[242607]: 2026-01-27 19:14:44.758729111 +0000 UTC m=+0.069291273 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 19:14:45 compute-0 nova_compute[185480]: 2026-01-27 19:14:45.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:14:46 compute-0 nova_compute[185480]: 2026-01-27 19:14:46.624 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:47 compute-0 podman[242643]: 2026-01-27 19:14:47.35403564 +0000 UTC m=+0.112805945 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, architecture=x86_64, io.openshift.tags=base rhel9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-type=git, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.539 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.540 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.541 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.542 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.658 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.732 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.733 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.752 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.799 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.801 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.864 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.866 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.930 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:47 compute-0 nova_compute[185480]: 2026-01-27 19:14:47.937 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.012 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.013 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.084 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.085 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.152 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.153 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.214 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.220 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.299 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.301 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.359 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.360 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.423 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.424 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.488 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.498 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.559 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.560 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.649 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.650 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.732 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.732 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:14:48 compute-0 nova_compute[185480]: 2026-01-27 19:14:48.795 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
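[editor's note] The resource audit above repeatedly shells out to `qemu-img info` wrapped in oslo_concurrency.prlimit, which caps the child's address space (1 GiB) and CPU time (30 s). If you want to reproduce one of those exact invocations by hand, a minimal sketch follows; the command and paths are copied verbatim from the log, and it assumes the oslo.concurrency package is installed and the calling user can read the image file.

```python
# Re-run one of the logged qemu-img probes under the same prlimit caps.
import json
import subprocess

cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824",   # cap address space at 1 GiB
    "--cpu=30",          # cap CPU time at 30 s
    "--", "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info",
    "/var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk",
    "--force-share", "--output=json",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
info = json.loads(result.stdout)
print(info["format"], info["virtual-size"])  # e.g. image format and virtual size in bytes
```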
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.191 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.192 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4581MB free_disk=72.3547477722168GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.193 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.193 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.275 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.276 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 41f46cfd-06bf-4ef6-85a3-cc6e8629637e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.276 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 04920b61-96ec-47fc-9d6d-dfdb491e0e77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.276 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.276 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.277 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.404 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.418 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.421 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:14:49 compute-0 nova_compute[185480]: 2026-01-27 19:14:49.421 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
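[editor's note] The inventory reported to placement at 19:14:49.418 explains the headroom in the final resource view: placement treats the schedulable capacity of each resource class as (total - reserved) * allocation_ratio. A worked check with the numbers copied from that log line:

```python
# Schedulable capacity per resource class, using the logged inventory values.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: schedulable capacity = {capacity}")
# VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 70.2
# With 4 instances each holding {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1},
# that matches the "used_vcpus=4, used_ram=2560MB, used_disk=8GB" view above.
```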
Jan 27 19:14:50 compute-0 nova_compute[185480]: 2026-01-27 19:14:50.422 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:14:50 compute-0 nova_compute[185480]: 2026-01-27 19:14:50.422 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:14:50 compute-0 nova_compute[185480]: 2026-01-27 19:14:50.509 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:14:50 compute-0 nova_compute[185480]: 2026-01-27 19:14:50.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:14:51 compute-0 nova_compute[185480]: 2026-01-27 19:14:51.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:14:51 compute-0 nova_compute[185480]: 2026-01-27 19:14:51.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:14:51 compute-0 nova_compute[185480]: 2026-01-27 19:14:51.628 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:52 compute-0 nova_compute[185480]: 2026-01-27 19:14:52.744 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:53 compute-0 podman[242711]: 2026-01-27 19:14:53.309585036 +0000 UTC m=+0.089079506 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, config_id=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 27 19:14:53 compute-0 nova_compute[185480]: 2026-01-27 19:14:53.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:14:56 compute-0 nova_compute[185480]: 2026-01-27 19:14:56.632 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:57 compute-0 nova_compute[185480]: 2026-01-27 19:14:57.747 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:14:59 compute-0 podman[201378]: time="2026-01-27T19:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:14:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:14:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Jan 27 19:15:00 compute-0 podman[242732]: 2026-01-27 19:15:00.31357394 +0000 UTC m=+0.079482200 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 27 19:15:01 compute-0 openstack_network_exporter[204477]: ERROR   19:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:15:01 compute-0 openstack_network_exporter[204477]: ERROR   19:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:15:01 compute-0 nova_compute[185480]: 2026-01-27 19:15:01.635 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:02 compute-0 nova_compute[185480]: 2026-01-27 19:15:02.749 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:03 compute-0 podman[242753]: 2026-01-27 19:15:03.328129191 +0000 UTC m=+0.077522772 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:15:03 compute-0 podman[242751]: 2026-01-27 19:15:03.355534881 +0000 UTC m=+0.110926528 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:15:03 compute-0 podman[242752]: 2026-01-27 19:15:03.383712108 +0000 UTC m=+0.134298389 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:15:06 compute-0 nova_compute[185480]: 2026-01-27 19:15:06.638 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:07 compute-0 nova_compute[185480]: 2026-01-27 19:15:07.751 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:11 compute-0 nova_compute[185480]: 2026-01-27 19:15:11.641 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:12 compute-0 nova_compute[185480]: 2026-01-27 19:15:12.754 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:15 compute-0 podman[242815]: 2026-01-27 19:15:15.315170769 +0000 UTC m=+0.076434447 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 27 19:15:15 compute-0 podman[242814]: 2026-01-27 19:15:15.347713763 +0000 UTC m=+0.100319200 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 19:15:16 compute-0 nova_compute[185480]: 2026-01-27 19:15:16.644 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:17 compute-0 nova_compute[185480]: 2026-01-27 19:15:17.757 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:18 compute-0 podman[242858]: 2026-01-27 19:15:18.304265024 +0000 UTC m=+0.075637947 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, managed_by=edpm_ansible, io.openshift.expose-services=, release-0.7.12=, version=9.4, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, container_name=kepler, io.openshift.tags=base rhel9, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc.)
Jan 27 19:15:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:20.521 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:15:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:20.523 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:15:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:20.524 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:15:21 compute-0 nova_compute[185480]: 2026-01-27 19:15:21.648 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:22 compute-0 nova_compute[185480]: 2026-01-27 19:15:22.760 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:24 compute-0 podman[242879]: 2026-01-27 19:15:24.303841843 +0000 UTC m=+0.079941262 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vcs-type=git, version=9.6, config_id=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Jan 27 19:15:25 compute-0 nova_compute[185480]: 2026-01-27 19:15:25.782 185484 DEBUG oslo_concurrency.lockutils [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:15:25 compute-0 nova_compute[185480]: 2026-01-27 19:15:25.784 185484 DEBUG oslo_concurrency.lockutils [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:15:25 compute-0 nova_compute[185480]: 2026-01-27 19:15:25.785 185484 DEBUG oslo_concurrency.lockutils [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:15:25 compute-0 nova_compute[185480]: 2026-01-27 19:15:25.785 185484 DEBUG oslo_concurrency.lockutils [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:15:25 compute-0 nova_compute[185480]: 2026-01-27 19:15:25.785 185484 DEBUG oslo_concurrency.lockutils [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:15:25 compute-0 nova_compute[185480]: 2026-01-27 19:15:25.787 185484 INFO nova.compute.manager [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Terminating instance
Jan 27 19:15:25 compute-0 nova_compute[185480]: 2026-01-27 19:15:25.788 185484 DEBUG nova.compute.manager [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 19:15:25 compute-0 kernel: tap1447625c-00 (unregistering): left promiscuous mode
Jan 27 19:15:25 compute-0 NetworkManager[56191]: <info>  [1769541325.8301] device (tap1447625c-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 19:15:25 compute-0 nova_compute[185480]: 2026-01-27 19:15:25.839 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:25 compute-0 ovn_controller[97647]: 2026-01-27T19:15:25Z|00050|binding|INFO|Releasing lport 1447625c-00ab-407e-94d6-83dc67aba59c from this chassis (sb_readonly=0)
Jan 27 19:15:25 compute-0 ovn_controller[97647]: 2026-01-27T19:15:25Z|00051|binding|INFO|Setting lport 1447625c-00ab-407e-94d6-83dc67aba59c down in Southbound
Jan 27 19:15:25 compute-0 ovn_controller[97647]: 2026-01-27T19:15:25Z|00052|binding|INFO|Removing iface tap1447625c-00 ovn-installed in OVS
Jan 27 19:15:25 compute-0 nova_compute[185480]: 2026-01-27 19:15:25.842 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:25 compute-0 nova_compute[185480]: 2026-01-27 19:15:25.859 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:25 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 27 19:15:25 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 5min 24.206s CPU time.
Jan 27 19:15:25 compute-0 systemd-machined[156762]: Machine qemu-2-instance-00000002 terminated.
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.065 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:f3:b6 192.168.0.54'], port_security=['fa:16:3e:6c:f3:b6 192.168.0.54'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-z6txynvcwoqi-27d46klag4r3-odvujbh3kmzr-port-r2mhxscfwr32', 'neutron:cidrs': '192.168.0.54/24', 'neutron:device_id': '41f46cfd-06bf-4ef6-85a3-cc6e8629637e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-z6txynvcwoqi-27d46klag4r3-odvujbh3kmzr-port-r2mhxscfwr32', 'neutron:project_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a99badb-bb64-4e2f-95a8-78f317eb6676', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.232', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33ebbec8-56f4-45ac-84a6-f80dd4a7c167, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=1447625c-00ab-407e-94d6-83dc67aba59c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.066 106898 INFO neutron.agent.ovn.metadata.agent [-] Port 1447625c-00ab-407e-94d6-83dc67aba59c in datapath 4f32262d-dee8-406b-8a5a-09e95f48c8d5 unbound from our chassis
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.067 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f32262d-dee8-406b-8a5a-09e95f48c8d5
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.075 185484 INFO nova.virt.libvirt.driver [-] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Instance destroyed successfully.
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.075 185484 DEBUG nova.objects.instance [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'resources' on Instance uuid 41f46cfd-06bf-4ef6-85a3-cc6e8629637e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.084 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[cbca89b3-a718-49b0-a939-f619746ac6d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.117 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[020e1080-4d17-4281-aa5e-780e2120b351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.120 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[17c13343-735a-4cdc-b1ac-17cccc2c2f73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.149 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[6803a281-8799-4a5a-ac7a-df7dc2a556a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.166 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[a7466d58-133b-463a-8869-2ccadf9c1502]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f32262d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:bd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383551, 'reachable_time': 41862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242935, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.179 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4c77e0-a244-4493-9848-23361483d352]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap4f32262d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383562, 'tstamp': 383562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242936, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4f32262d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383566, 'tstamp': 383566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242936, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.181 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f32262d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.183 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.189 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.191 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f32262d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.191 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.192 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f32262d-d0, col_values=(('external_ids', {'iface-id': '5950ebf0-6d13-4405-b07d-fec152665bda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:15:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:26.192 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.650 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.763 185484 DEBUG nova.virt.libvirt.vif [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T19:06:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-nvcwoqi-27d46klag4r3-odvujbh3kmzr-vnf-fycoeqo54y5d',id=2,image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T19:06:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f04ec1493db14ca1adbb4b6abd1667b1',ramdisk_id='',reservation_id='r-fernbz5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T19:06:29Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04OTUyNDA2Njg1Mzc2NjYxMjIzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg5NTI0MDY2ODUzNzY2NjEyMjM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODk1MjQwNjY4NTM3NjY2MTIyMz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg5NTI0MDY2ODUzNzY2NjEyMjM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04OTUyNDA2Njg1Mzc2NjYxMjIzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04OTUyNDA2Njg1Mzc2NjYxMjIzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 27 19:15:26 compute-0 nova_compute[185480]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODk1MjQwNjY4NTM3NjY2MTIyMz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg5NTI0MDY2ODUzNzY2NjEyMjM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04OTUyNDA2Njg1Mzc2NjYxMjIzPT0tLQo=',user_id='6d30d46dc88a4403b3a241949384d8f7',uuid=41f46cfd-06bf-4ef6-85a3-cc6e8629637e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.764 185484 DEBUG nova.network.os_vif_util [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converting VIF {"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.765 185484 DEBUG nova.network.os_vif_util [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:f3:b6,bridge_name='br-int',has_traffic_filtering=True,id=1447625c-00ab-407e-94d6-83dc67aba59c,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1447625c-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.766 185484 DEBUG os_vif [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:f3:b6,bridge_name='br-int',has_traffic_filtering=True,id=1447625c-00ab-407e-94d6-83dc67aba59c,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1447625c-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.768 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.768 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1447625c-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.770 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.771 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.774 185484 INFO os_vif [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:f3:b6,bridge_name='br-int',has_traffic_filtering=True,id=1447625c-00ab-407e-94d6-83dc67aba59c,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1447625c-00')
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.774 185484 INFO nova.virt.libvirt.driver [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Deleting instance files /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e_del
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.775 185484 INFO nova.virt.libvirt.driver [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Deletion of /var/lib/nova/instances/41f46cfd-06bf-4ef6-85a3-cc6e8629637e_del complete
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.893 185484 DEBUG nova.virt.libvirt.host [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.894 185484 INFO nova.virt.libvirt.host [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] UEFI support detected
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.897 185484 INFO nova.compute.manager [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Took 1.11 seconds to destroy the instance on the hypervisor.
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.897 185484 DEBUG oslo.service.loopingcall [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.898 185484 DEBUG nova.compute.manager [-] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 19:15:26 compute-0 nova_compute[185480]: 2026-01-27 19:15:26.898 185484 DEBUG nova.network.neutron [-] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 19:15:26 compute-0 rsyslogd[235877]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 19:15:26.763 185484 DEBUG nova.virt.libvirt.vif [None req-35ebafe5-9e [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:15:27 compute-0 nova_compute[185480]: 2026-01-27 19:15:27.131 185484 DEBUG nova.compute.manager [req-396ec5eb-9202-4e93-ae61-19c75c40fcce req-807bdd0e-3df0-47bc-9043-9cf12836ab82 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Received event network-vif-unplugged-1447625c-00ab-407e-94d6-83dc67aba59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:15:27 compute-0 nova_compute[185480]: 2026-01-27 19:15:27.132 185484 DEBUG oslo_concurrency.lockutils [req-396ec5eb-9202-4e93-ae61-19c75c40fcce req-807bdd0e-3df0-47bc-9043-9cf12836ab82 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:15:27 compute-0 nova_compute[185480]: 2026-01-27 19:15:27.132 185484 DEBUG oslo_concurrency.lockutils [req-396ec5eb-9202-4e93-ae61-19c75c40fcce req-807bdd0e-3df0-47bc-9043-9cf12836ab82 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:15:27 compute-0 nova_compute[185480]: 2026-01-27 19:15:27.132 185484 DEBUG oslo_concurrency.lockutils [req-396ec5eb-9202-4e93-ae61-19c75c40fcce req-807bdd0e-3df0-47bc-9043-9cf12836ab82 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:15:27 compute-0 nova_compute[185480]: 2026-01-27 19:15:27.133 185484 DEBUG nova.compute.manager [req-396ec5eb-9202-4e93-ae61-19c75c40fcce req-807bdd0e-3df0-47bc-9043-9cf12836ab82 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] No waiting events found dispatching network-vif-unplugged-1447625c-00ab-407e-94d6-83dc67aba59c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:15:27 compute-0 nova_compute[185480]: 2026-01-27 19:15:27.133 185484 DEBUG nova.compute.manager [req-396ec5eb-9202-4e93-ae61-19c75c40fcce req-807bdd0e-3df0-47bc-9043-9cf12836ab82 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Received event network-vif-unplugged-1447625c-00ab-407e-94d6-83dc67aba59c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 19:15:27 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:27.240 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:15:27 compute-0 nova_compute[185480]: 2026-01-27 19:15:27.240 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:27 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:27.243 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:15:27 compute-0 nova_compute[185480]: 2026-01-27 19:15:27.762 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:28 compute-0 nova_compute[185480]: 2026-01-27 19:15:28.515 185484 DEBUG nova.compute.manager [req-84aa18a6-fa61-4997-8433-0b3e646299cc req-d8e763f0-73ad-42b0-859f-f43ae28d820c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Received event network-changed-1447625c-00ab-407e-94d6-83dc67aba59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:15:28 compute-0 nova_compute[185480]: 2026-01-27 19:15:28.516 185484 DEBUG nova.compute.manager [req-84aa18a6-fa61-4997-8433-0b3e646299cc req-d8e763f0-73ad-42b0-859f-f43ae28d820c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Refreshing instance network info cache due to event network-changed-1447625c-00ab-407e-94d6-83dc67aba59c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:15:28 compute-0 nova_compute[185480]: 2026-01-27 19:15:28.516 185484 DEBUG oslo_concurrency.lockutils [req-84aa18a6-fa61-4997-8433-0b3e646299cc req-d8e763f0-73ad-42b0-859f-f43ae28d820c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:15:28 compute-0 nova_compute[185480]: 2026-01-27 19:15:28.516 185484 DEBUG oslo_concurrency.lockutils [req-84aa18a6-fa61-4997-8433-0b3e646299cc req-d8e763f0-73ad-42b0-859f-f43ae28d820c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:15:28 compute-0 nova_compute[185480]: 2026-01-27 19:15:28.517 185484 DEBUG nova.network.neutron [req-84aa18a6-fa61-4997-8433-0b3e646299cc req-d8e763f0-73ad-42b0-859f-f43ae28d820c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Refreshing network info cache for port 1447625c-00ab-407e-94d6-83dc67aba59c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:15:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:15:29.245 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:15:29 compute-0 podman[201378]: time="2026-01-27T19:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:15:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:15:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Jan 27 19:15:30 compute-0 nova_compute[185480]: 2026-01-27 19:15:30.145 185484 DEBUG nova.compute.manager [req-39d97465-644e-4aaa-9189-d49f4f0d2b8e req-a76a009b-6871-4656-a6d0-8910db4f4f88 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Received event network-vif-plugged-1447625c-00ab-407e-94d6-83dc67aba59c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:15:30 compute-0 nova_compute[185480]: 2026-01-27 19:15:30.146 185484 DEBUG oslo_concurrency.lockutils [req-39d97465-644e-4aaa-9189-d49f4f0d2b8e req-a76a009b-6871-4656-a6d0-8910db4f4f88 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:15:30 compute-0 nova_compute[185480]: 2026-01-27 19:15:30.146 185484 DEBUG oslo_concurrency.lockutils [req-39d97465-644e-4aaa-9189-d49f4f0d2b8e req-a76a009b-6871-4656-a6d0-8910db4f4f88 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:15:30 compute-0 nova_compute[185480]: 2026-01-27 19:15:30.146 185484 DEBUG oslo_concurrency.lockutils [req-39d97465-644e-4aaa-9189-d49f4f0d2b8e req-a76a009b-6871-4656-a6d0-8910db4f4f88 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:15:30 compute-0 nova_compute[185480]: 2026-01-27 19:15:30.147 185484 DEBUG nova.compute.manager [req-39d97465-644e-4aaa-9189-d49f4f0d2b8e req-a76a009b-6871-4656-a6d0-8910db4f4f88 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] No waiting events found dispatching network-vif-plugged-1447625c-00ab-407e-94d6-83dc67aba59c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:15:30 compute-0 nova_compute[185480]: 2026-01-27 19:15:30.147 185484 WARNING nova.compute.manager [req-39d97465-644e-4aaa-9189-d49f4f0d2b8e req-a76a009b-6871-4656-a6d0-8910db4f4f88 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Received unexpected event network-vif-plugged-1447625c-00ab-407e-94d6-83dc67aba59c for instance with vm_state active and task_state deleting.
Jan 27 19:15:31 compute-0 podman[242938]: 2026-01-27 19:15:31.320960436 +0000 UTC m=+0.083865578 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 27 19:15:31 compute-0 openstack_network_exporter[204477]: ERROR   19:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:15:31 compute-0 openstack_network_exporter[204477]: ERROR   19:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:15:31 compute-0 nova_compute[185480]: 2026-01-27 19:15:31.581 185484 DEBUG nova.network.neutron [-] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:15:31 compute-0 nova_compute[185480]: 2026-01-27 19:15:31.608 185484 INFO nova.compute.manager [-] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Took 4.71 seconds to deallocate network for instance.
Jan 27 19:15:31 compute-0 nova_compute[185480]: 2026-01-27 19:15:31.663 185484 DEBUG oslo_concurrency.lockutils [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:15:31 compute-0 nova_compute[185480]: 2026-01-27 19:15:31.664 185484 DEBUG oslo_concurrency.lockutils [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:15:31 compute-0 nova_compute[185480]: 2026-01-27 19:15:31.685 185484 DEBUG nova.network.neutron [req-84aa18a6-fa61-4997-8433-0b3e646299cc req-d8e763f0-73ad-42b0-859f-f43ae28d820c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updated VIF entry in instance network info cache for port 1447625c-00ab-407e-94d6-83dc67aba59c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:15:31 compute-0 nova_compute[185480]: 2026-01-27 19:15:31.687 185484 DEBUG nova.network.neutron [req-84aa18a6-fa61-4997-8433-0b3e646299cc req-d8e763f0-73ad-42b0-859f-f43ae28d820c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Updating instance_info_cache with network_info: [{"id": "1447625c-00ab-407e-94d6-83dc67aba59c", "address": "fa:16:3e:6c:f3:b6", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.54", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1447625c-00", "ovs_interfaceid": "1447625c-00ab-407e-94d6-83dc67aba59c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:15:31 compute-0 nova_compute[185480]: 2026-01-27 19:15:31.712 185484 DEBUG oslo_concurrency.lockutils [req-84aa18a6-fa61-4997-8433-0b3e646299cc req-d8e763f0-73ad-42b0-859f-f43ae28d820c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-41f46cfd-06bf-4ef6-85a3-cc6e8629637e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:15:31 compute-0 nova_compute[185480]: 2026-01-27 19:15:31.771 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:31 compute-0 nova_compute[185480]: 2026-01-27 19:15:31.794 185484 DEBUG nova.compute.provider_tree [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:15:31 compute-0 nova_compute[185480]: 2026-01-27 19:15:31.807 185484 DEBUG nova.scheduler.client.report [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:15:31 compute-0 nova_compute[185480]: 2026-01-27 19:15:31.981 185484 DEBUG oslo_concurrency.lockutils [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:15:32 compute-0 nova_compute[185480]: 2026-01-27 19:15:32.044 185484 INFO nova.scheduler.client.report [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Deleted allocations for instance 41f46cfd-06bf-4ef6-85a3-cc6e8629637e
Jan 27 19:15:32 compute-0 nova_compute[185480]: 2026-01-27 19:15:32.265 185484 DEBUG oslo_concurrency.lockutils [None req-35ebafe5-9e88-4a61-a011-3c81310c15ae 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "41f46cfd-06bf-4ef6-85a3-cc6e8629637e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:15:32 compute-0 nova_compute[185480]: 2026-01-27 19:15:32.765 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:34 compute-0 podman[242957]: 2026-01-27 19:15:34.304336696 +0000 UTC m=+0.070716756 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:15:34 compute-0 podman[242955]: 2026-01-27 19:15:34.311831629 +0000 UTC m=+0.084506043 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 19:15:34 compute-0 podman[242956]: 2026-01-27 19:15:34.372044958 +0000 UTC m=+0.136201624 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Jan 27 19:15:36 compute-0 nova_compute[185480]: 2026-01-27 19:15:36.774 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:37 compute-0 nova_compute[185480]: 2026-01-27 19:15:37.767 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:41 compute-0 nova_compute[185480]: 2026-01-27 19:15:41.071 185484 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769541326.0697765, 41f46cfd-06bf-4ef6-85a3-cc6e8629637e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:15:41 compute-0 nova_compute[185480]: 2026-01-27 19:15:41.072 185484 INFO nova.compute.manager [-] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] VM Stopped (Lifecycle Event)
Jan 27 19:15:41 compute-0 nova_compute[185480]: 2026-01-27 19:15:41.089 185484 DEBUG nova.compute.manager [None req-39155478-3ea1-45f2-ac0a-02b8f5a66d2f - - - - - -] [instance: 41f46cfd-06bf-4ef6-85a3-cc6e8629637e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:15:41 compute-0 nova_compute[185480]: 2026-01-27 19:15:41.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:41 compute-0 nova_compute[185480]: 2026-01-27 19:15:41.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:15:41 compute-0 nova_compute[185480]: 2026-01-27 19:15:41.777 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:41 compute-0 nova_compute[185480]: 2026-01-27 19:15:41.850 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:15:41 compute-0 nova_compute[185480]: 2026-01-27 19:15:41.851 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:15:41 compute-0 nova_compute[185480]: 2026-01-27 19:15:41.852 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:15:42 compute-0 nova_compute[185480]: 2026-01-27 19:15:42.771 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:43 compute-0 nova_compute[185480]: 2026-01-27 19:15:43.017 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Updating instance_info_cache with network_info: [{"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:15:43 compute-0 nova_compute[185480]: 2026-01-27 19:15:43.038 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:15:43 compute-0 nova_compute[185480]: 2026-01-27 19:15:43.039 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:15:44 compute-0 nova_compute[185480]: 2026-01-27 19:15:44.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:44 compute-0 nova_compute[185480]: 2026-01-27 19:15:44.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 19:15:44 compute-0 nova_compute[185480]: 2026-01-27 19:15:44.530 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 19:15:46 compute-0 podman[243020]: 2026-01-27 19:15:46.288972866 +0000 UTC m=+0.068244217 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:15:46 compute-0 podman[243021]: 2026-01-27 19:15:46.342908552 +0000 UTC m=+0.116027523 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 19:15:46 compute-0 nova_compute[185480]: 2026-01-27 19:15:46.530 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:46 compute-0 nova_compute[185480]: 2026-01-27 19:15:46.781 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:47 compute-0 nova_compute[185480]: 2026-01-27 19:15:47.774 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:49 compute-0 podman[243061]: 2026-01-27 19:15:49.29745492 +0000 UTC m=+0.075939664 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., config_id=kepler, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.542 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.543 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.543 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.544 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.651 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.711 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.713 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.788 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.789 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.845 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.846 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.908 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.915 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.976 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:15:49 compute-0 nova_compute[185480]: 2026-01-27 19:15:49.977 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.042 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.043 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.109 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.110 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.184 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.190 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.256 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.257 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.318 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.319 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.377 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.378 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.459 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.819 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.821 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4750MB free_disk=72.37727355957031GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.821 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:15:50 compute-0 nova_compute[185480]: 2026-01-27 19:15:50.822 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.025 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.025 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 04920b61-96ec-47fc-9d6d-dfdb491e0e77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.026 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.026 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.026 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.099 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing inventories for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.172 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating ProviderTree inventory for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.173 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating inventory in ProviderTree for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.187 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing aggregate associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.211 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing trait associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, traits: HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.285 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.300 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.316 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.317 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:15:51 compute-0 nova_compute[185480]: 2026-01-27 19:15:51.783 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:52 compute-0 nova_compute[185480]: 2026-01-27 19:15:52.313 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:52 compute-0 nova_compute[185480]: 2026-01-27 19:15:52.313 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:52 compute-0 nova_compute[185480]: 2026-01-27 19:15:52.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:52 compute-0 nova_compute[185480]: 2026-01-27 19:15:52.777 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:53 compute-0 nova_compute[185480]: 2026-01-27 19:15:53.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:53 compute-0 nova_compute[185480]: 2026-01-27 19:15:53.517 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:53 compute-0 nova_compute[185480]: 2026-01-27 19:15:53.518 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:55 compute-0 podman[243121]: 2026-01-27 19:15:55.3002118 +0000 UTC m=+0.075709848 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, config_id=openstack_network_exporter, name=ubi9-minimal, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Jan 27 19:15:55 compute-0 nova_compute[185480]: 2026-01-27 19:15:55.637 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:55 compute-0 nova_compute[185480]: 2026-01-27 19:15:55.658 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:15:55 compute-0 nova_compute[185480]: 2026-01-27 19:15:55.659 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 19:15:56 compute-0 nova_compute[185480]: 2026-01-27 19:15:56.786 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:57 compute-0 nova_compute[185480]: 2026-01-27 19:15:57.780 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:15:59 compute-0 podman[201378]: time="2026-01-27T19:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:15:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:15:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.183 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.212 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Triggering sync for uuid b6b280bb-d859-43f3-836a-f93d00510948 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.213 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Triggering sync for uuid 04920b61-96ec-47fc-9d6d-dfdb491e0e77 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.213 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Triggering sync for uuid 92ff85a4-5620-4dd0-8930-62b7f561edf6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.214 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "b6b280bb-d859-43f3-836a-f93d00510948" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.214 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "b6b280bb-d859-43f3-836a-f93d00510948" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.215 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.216 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.216 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "92ff85a4-5620-4dd0-8930-62b7f561edf6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.217 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.275 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.312 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.317 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "b6b280bb-d859-43f3-836a-f93d00510948" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:16:01 compute-0 openstack_network_exporter[204477]: ERROR   19:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:16:01 compute-0 openstack_network_exporter[204477]: ERROR   19:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:16:01 compute-0 nova_compute[185480]: 2026-01-27 19:16:01.789 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:02 compute-0 podman[243140]: 2026-01-27 19:16:02.318917016 +0000 UTC m=+0.097561712 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Jan 27 19:16:02 compute-0 nova_compute[185480]: 2026-01-27 19:16:02.782 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:04 compute-0 ovn_controller[97647]: 2026-01-27T19:16:04Z|00053|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 27 19:16:05 compute-0 podman[243162]: 2026-01-27 19:16:05.318378137 +0000 UTC m=+0.082265699 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 19:16:05 compute-0 podman[243164]: 2026-01-27 19:16:05.340763834 +0000 UTC m=+0.100273139 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:16:05 compute-0 podman[243163]: 2026-01-27 19:16:05.383540287 +0000 UTC m=+0.145100052 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:16:06 compute-0 nova_compute[185480]: 2026-01-27 19:16:06.791 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:07 compute-0 nova_compute[185480]: 2026-01-27 19:16:07.785 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:11 compute-0 nova_compute[185480]: 2026-01-27 19:16:11.793 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:12 compute-0 nova_compute[185480]: 2026-01-27 19:16:12.787 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:16 compute-0 nova_compute[185480]: 2026-01-27 19:16:16.796 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:17 compute-0 podman[243224]: 2026-01-27 19:16:17.30134798 +0000 UTC m=+0.078866115 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 19:16:17 compute-0 podman[243225]: 2026-01-27 19:16:17.330794311 +0000 UTC m=+0.097465304 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:16:17 compute-0 nova_compute[185480]: 2026-01-27 19:16:17.790 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:20 compute-0 podman[243266]: 2026-01-27 19:16:20.326354057 +0000 UTC m=+0.097354202 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, vcs-type=git, name=ubi9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, config_id=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 19:16:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:16:20.523 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:16:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:16:20.523 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:16:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:16:20.524 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:16:21 compute-0 nova_compute[185480]: 2026-01-27 19:16:21.799 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:22 compute-0 nova_compute[185480]: 2026-01-27 19:16:22.793 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:26 compute-0 podman[243285]: 2026-01-27 19:16:26.315658715 +0000 UTC m=+0.097595887 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Jan 27 19:16:26 compute-0 nova_compute[185480]: 2026-01-27 19:16:26.801 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:27 compute-0 nova_compute[185480]: 2026-01-27 19:16:27.795 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:29 compute-0 podman[201378]: time="2026-01-27T19:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:16:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:16:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Jan 27 19:16:31 compute-0 openstack_network_exporter[204477]: ERROR   19:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:16:31 compute-0 openstack_network_exporter[204477]: ERROR   19:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:16:31 compute-0 nova_compute[185480]: 2026-01-27 19:16:31.803 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.097 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.098 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.172 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.173 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d25406b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
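[editor's note] The registration lines above show each compute pollster being loaded as a stevedore extension and handed to one shared ThreadPoolExecutor. Below is a minimal, illustrative sketch of that loading pattern only; the entry-point namespace string and the worker count are assumptions, not values taken from this log, and this is not Ceilometer's actual register_pollster_execution code.

    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    # Enumerate pollster plugins from an entry-point namespace; the namespace
    # string is an assumption for illustration and is not read from the log.
    mgr = extension.ExtensionManager(namespace='ceilometer.poll.compute')

    # A single shared thread pool, analogous to the executor object mentioned
    # in the registration lines; the worker count is an assumption.
    executor = ThreadPoolExecutor(max_workers=4)

    def register(ext):
        # stand-in for a per-pollster registration/execution task
        return 'registered %s' % ext.name

    futures = [executor.submit(register, ext) for ext in mgr]
    for f in futures:
        print(f.result())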
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.174 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b6b280bb-d859-43f3-836a-f93d00510948', 'name': 'test_0', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.178 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '92ff85a4-5620-4dd0-8930-62b7f561edf6', 'name': 'vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.181 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '04920b61-96ec-47fc-9d6d-dfdb491e0e77', 'name': 'vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
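[editor's note] The three "instance data" lines above are the discovery result for this hypervisor: three running instances whose 'id' fields are libvirt domain UUIDs. A hedged sketch of listing the active domains with the libvirt Python bindings follows; the connection URI is an assumption, and Ceilometer's real discovery additionally merges Nova metadata (flavor, tenant, image) as shown in the log.

    import libvirt

    conn = libvirt.open('qemu:///system')  # URI assumed for a local KVM host
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        # These UUIDs would match the 'id' fields in the instance data above,
        # and the names match OS-EXT-SRV-ATTR:instance_name (instance-0000000N).
        print(dom.UUIDString(), dom.name())
    conn.close()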
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.182 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.182 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T19:16:32.182610) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.208 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 21766144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.208 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.209 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.232 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.233 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.233 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.255 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.256 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.256 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.256 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
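[editor's note] Each instance reports three disk.device.allocation samples, one per attached device. Those figures line up with what libvirt's virDomainGetBlockInfo reports per disk (capacity, allocation, physical), which also backs the disk.device.capacity samples later in this cycle. A sketch follows; the connection URI and the device names are assumptions (Ceilometer resolves device names from the domain XML), and this is not the agent's actual inspector code.

    import libvirt

    conn = libvirt.open('qemu:///system')                      # URI assumed
    dom = conn.lookupByUUIDString('b6b280bb-d859-43f3-836a-f93d00510948')
    for dev in ('vda', 'vdb', 'vdc'):                          # device names assumed
        capacity, allocation, physical = dom.blockInfo(dev)
        # capacity, allocation and physical feed the disk.device.capacity,
        # disk.device.allocation and disk.device.physical meters respectively
        print(dev, capacity, allocation, physical)
    conn.close()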
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.257 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.257 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.257 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T19:16:32.257518) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.278 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.300 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/memory.usage volume: 49.109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.321 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/memory.usage volume: 48.95703125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
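[editor's note] The memory.usage volumes (~49 against a 512 MB flavor) are in MiB. One plausible way to derive a similar number is from libvirt's per-domain memory statistics; treat the arithmetic below as an assumption, since the exact fields Ceilometer combines may differ.

    import libvirt

    conn = libvirt.open('qemu:///system')   # URI assumed
    dom = conn.lookupByUUIDString('b6b280bb-d859-43f3-836a-f93d00510948')
    stats = dom.memoryStats()               # values are reported in KiB
    if 'available' in stats and 'unused' in stats:
        used_mib = (stats['available'] - stats['unused']) / 1024.0
    else:
        used_mib = stats.get('rss', 0) / 1024.0   # fallback, assumption
    print('memory.usage ~ %.2f MiB' % used_mib)
    conn.close()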
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.322 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.322 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.322 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.323 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.323 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T19:16:32.322856) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.324 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T19:16:32.324552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.328 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.332 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.335 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
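[editor's note] The network.* meters in this cycle come from per-interface counters; the ".delta" variants report the change since the previous poll (here 84 bytes per instance). A sketch of reading the raw counters with libvirt and differencing them against a cached snapshot follows; the tap device name and the cache are assumptions, not how the agent actually keys its cache.

    import libvirt

    previous = {}   # iface -> last rx_bytes, kept across polling cycles (assumed cache)

    def poll_incoming_bytes_delta(dom, iface='tap0'):        # iface name assumed
        rx_bytes, rx_pkts, rx_errs, rx_drop, tx_bytes, tx_pkts, tx_errs, tx_drop = \
            dom.interfaceStats(iface)
        delta = rx_bytes - previous.get(iface, rx_bytes)      # 0 on the first poll
        previous[iface] = rx_bytes
        # rx_pkts, rx_drop and rx_errs map to network.incoming.packets,
        # .packets.drop and .packets.error; the tx_* counters feed the outgoing meters
        return delta

    conn = libvirt.open('qemu:///system')                     # URI assumed
    dom = conn.lookupByUUIDString('b6b280bb-d859-43f3-836a-f93d00510948')
    print(poll_incoming_bytes_delta(dom))
    conn.close()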
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.336 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.336 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T19:16:32.336432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.402 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.403 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.403 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.473 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.474 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.474 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.546 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.546 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.547 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
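[editor's note] The disk.device.read.requests values (840/173/109 etc.) are cumulative per-device request counters, and the disk.device.read.bytes samples later in this cycle come from the same source. A sketch using virDomainBlockStats follows; URI and device names are assumptions.

    import libvirt

    conn = libvirt.open('qemu:///system')                    # URI assumed
    dom = conn.lookupByUUIDString('b6b280bb-d859-43f3-836a-f93d00510948')
    for dev in ('vda', 'vdb', 'vdc'):                        # device names assumed
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        # rd_req and rd_bytes correspond to disk.device.read.requests and
        # disk.device.read.bytes; the wr_* counters feed the write-side meters
        print(dev, rd_req, rd_bytes)
    conn.close()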
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.548 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.548 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.548 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.549 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.549 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.549 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T19:16:32.548962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.549 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.550 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.550 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.551 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.551 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.551 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.551 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.551 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.drop volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.552 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.552 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.552 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T19:16:32.551415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.553 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.553 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.554 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.554 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T19:16:32.554124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.554 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.555 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.555 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.555 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.556 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.556 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.556 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.557 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.558 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.558 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.559 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/cpu volume: 40370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.559 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/cpu volume: 35370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.560 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/cpu volume: 38130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T19:16:32.558920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
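[editor's note] The cpu volumes (e.g. 40370000000) are cumulative guest CPU time in nanoseconds (~40.37 s), and the earlier power.state volume of 1 matches a running domain. Both values are visible in a single virDomainGetInfo call; the sketch below is illustrative only, with the URI assumed.

    import libvirt

    conn = libvirt.open('qemu:///system')                     # URI assumed
    dom = conn.lookupByUUIDString('b6b280bb-d859-43f3-836a-f93d00510948')
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print('power.state volume:', state)        # 1 == libvirt.VIR_DOMAIN_RUNNING
    print('cpu volume (ns):', cpu_time_ns)     # e.g. 40370000000 ns ~= 40.37 s
    conn.close()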
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.560 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.561 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.561 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T19:16:32.561273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.561 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.562 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.562 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.562 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.563 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.563 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.563 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.564 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.565 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.565 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.565 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.566 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.566 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T19:16:32.565480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.567 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.568 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T19:16:32.568407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.569 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 713872381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.570 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 102610265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.570 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 79720785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.570 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 724384888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.571 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 110122219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.571 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 191687644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.572 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.latency volume: 623753590 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.572 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.latency volume: 105958430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.573 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.read.latency volume: 83308410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
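[editor's note] The disk.device.read.latency volumes (e.g. 713872381) look like cumulative time spent on reads, expressed in nanoseconds (~714 ms). libvirt exposes such a counter as rd_total_times via virDomainBlockStatsFlags; whether Ceilometer uses exactly this field is an assumption, as are the URI and the device name below.

    import libvirt

    conn = libvirt.open('qemu:///system')                       # URI assumed
    dom = conn.lookupByUUIDString('b6b280bb-d859-43f3-836a-f93d00510948')
    stats = dom.blockStatsFlags('vda')                           # device name assumed
    # rd_total_times is the cumulative time spent on read operations, in nanoseconds
    print('disk.device.read.latency volume:', stats.get('rd_total_times'))
    conn.close()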
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.575 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.576 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.576 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.577 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.577 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.578 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T19:16:32.576200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.579 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.579 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T19:16:32.579176) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.579 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.580 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.581 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T19:16:32.581516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.581 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.582 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.582 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.582 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.582 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.583 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.583 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.583 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.583 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.584 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.585 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.585 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.585 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.585 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T19:16:32.585112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.586 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.586 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 2565149834 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.587 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 13830550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.587 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.587 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 2519550782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.587 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 10920156 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.588 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.588 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.latency volume: 1287563622 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T19:16:32.586586) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.588 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.latency volume: 10814945 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.588 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.589 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.590 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T19:16:32.589933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.590 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.590 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.591 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.591 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.591 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.591 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.592 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.592 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T19:16:32.591489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.592 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.592 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.593 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.593 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.593 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.593 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.594 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.594 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T19:16:32.594757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.595 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.595 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.595 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.595 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.596 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.596 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.596 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.596 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T19:16:32.595965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.597 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.597 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.598 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T19:16:32.597553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.598 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.598 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.598 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.598 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.598 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T19:16:32.598742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.599 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.599 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.599 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.599 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.599 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.600 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.600 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.600 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.601 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.601 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.601 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.601 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes volume: 2172 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T19:16:32.601646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.602 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.602 14 DEBUG ceilometer.compute.pollsters [-] 04920b61-96ec-47fc-9d6d-dfdb491e0e77/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.602 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:16:32.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:16:32 compute-0 nova_compute[185480]: 2026-01-27 19:16:32.797 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:33 compute-0 podman[243306]: 2026-01-27 19:16:33.34222253 +0000 UTC m=+0.115044559 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0)
Jan 27 19:16:33 compute-0 sshd-session[243325]: Invalid user sol from 45.148.10.240 port 35584
Jan 27 19:16:33 compute-0 sshd-session[243325]: Connection closed by invalid user sol 45.148.10.240 port 35584 [preauth]
Jan 27 19:16:36 compute-0 podman[243327]: 2026-01-27 19:16:36.31062728 +0000 UTC m=+0.073804633 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 19:16:36 compute-0 podman[243329]: 2026-01-27 19:16:36.313247174 +0000 UTC m=+0.078904797 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 19:16:36 compute-0 podman[243328]: 2026-01-27 19:16:36.360031433 +0000 UTC m=+0.123037192 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 27 19:16:36 compute-0 nova_compute[185480]: 2026-01-27 19:16:36.806 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:37 compute-0 nova_compute[185480]: 2026-01-27 19:16:37.799 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:41 compute-0 nova_compute[185480]: 2026-01-27 19:16:41.550 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:16:41 compute-0 nova_compute[185480]: 2026-01-27 19:16:41.551 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:16:41 compute-0 nova_compute[185480]: 2026-01-27 19:16:41.809 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:41 compute-0 nova_compute[185480]: 2026-01-27 19:16:41.839 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:16:41 compute-0 nova_compute[185480]: 2026-01-27 19:16:41.839 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:16:41 compute-0 nova_compute[185480]: 2026-01-27 19:16:41.840 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:16:42 compute-0 nova_compute[185480]: 2026-01-27 19:16:42.802 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:43 compute-0 nova_compute[185480]: 2026-01-27 19:16:43.286 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Updating instance_info_cache with network_info: [{"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:16:43 compute-0 nova_compute[185480]: 2026-01-27 19:16:43.308 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:16:43 compute-0 nova_compute[185480]: 2026-01-27 19:16:43.309 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:16:46 compute-0 nova_compute[185480]: 2026-01-27 19:16:46.812 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:47 compute-0 nova_compute[185480]: 2026-01-27 19:16:47.805 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:48 compute-0 podman[243394]: 2026-01-27 19:16:48.30515696 +0000 UTC m=+0.080979677 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 19:16:48 compute-0 podman[243393]: 2026-01-27 19:16:48.323924662 +0000 UTC m=+0.103661663 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:16:48 compute-0 nova_compute[185480]: 2026-01-27 19:16:48.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:16:50 compute-0 nova_compute[185480]: 2026-01-27 19:16:50.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:16:50 compute-0 nova_compute[185480]: 2026-01-27 19:16:50.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:16:51 compute-0 podman[243439]: 2026-01-27 19:16:51.300360108 +0000 UTC m=+0.075986397 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-type=git, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=kepler, io.buildah.version=1.29.0, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.511 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.540 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.541 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.541 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.541 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.632 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.699 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.701 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.781 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.782 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.815 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.851 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.852 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.915 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.924 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.982 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:16:51 compute-0 nova_compute[185480]: 2026-01-27 19:16:51.983 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.048 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.049 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.117 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.118 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.184 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.198 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.290 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.290 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.344 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.345 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.405 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.406 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.488 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.807 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.896 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.898 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4739MB free_disk=72.3773307800293GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.898 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.898 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.974 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.975 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 04920b61-96ec-47fc-9d6d-dfdb491e0e77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.975 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.976 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:16:52 compute-0 nova_compute[185480]: 2026-01-27 19:16:52.976 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:16:53 compute-0 nova_compute[185480]: 2026-01-27 19:16:53.045 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:16:53 compute-0 nova_compute[185480]: 2026-01-27 19:16:53.060 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:16:53 compute-0 nova_compute[185480]: 2026-01-27 19:16:53.063 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:16:53 compute-0 nova_compute[185480]: 2026-01-27 19:16:53.064 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:16:55 compute-0 nova_compute[185480]: 2026-01-27 19:16:55.065 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:16:55 compute-0 nova_compute[185480]: 2026-01-27 19:16:55.065 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:16:55 compute-0 nova_compute[185480]: 2026-01-27 19:16:55.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:16:56 compute-0 nova_compute[185480]: 2026-01-27 19:16:56.819 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:57 compute-0 podman[243495]: 2026-01-27 19:16:57.311514314 +0000 UTC m=+0.092635057 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6, config_id=openstack_network_exporter, name=ubi9-minimal, io.openshift.expose-services=)
Jan 27 19:16:57 compute-0 nova_compute[185480]: 2026-01-27 19:16:57.810 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:16:59 compute-0 podman[201378]: time="2026-01-27T19:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:16:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:16:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Jan 27 19:17:01 compute-0 openstack_network_exporter[204477]: ERROR   19:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:17:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:17:01 compute-0 openstack_network_exporter[204477]: ERROR   19:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:17:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:17:01 compute-0 nova_compute[185480]: 2026-01-27 19:17:01.821 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:02 compute-0 nova_compute[185480]: 2026-01-27 19:17:02.812 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:04 compute-0 podman[243517]: 2026-01-27 19:17:04.317065271 +0000 UTC m=+0.091161292 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 19:17:06 compute-0 nova_compute[185480]: 2026-01-27 19:17:06.824 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:07 compute-0 podman[243537]: 2026-01-27 19:17:07.332383004 +0000 UTC m=+0.084418439 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 19:17:07 compute-0 podman[243535]: 2026-01-27 19:17:07.335967161 +0000 UTC m=+0.092131355 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:17:07 compute-0 podman[243536]: 2026-01-27 19:17:07.372479643 +0000 UTC m=+0.134582031 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 19:17:07 compute-0 nova_compute[185480]: 2026-01-27 19:17:07.815 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:11 compute-0 nova_compute[185480]: 2026-01-27 19:17:11.827 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:12 compute-0 nova_compute[185480]: 2026-01-27 19:17:12.818 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:16 compute-0 nova_compute[185480]: 2026-01-27 19:17:16.833 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:17 compute-0 nova_compute[185480]: 2026-01-27 19:17:17.820 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:19 compute-0 podman[243602]: 2026-01-27 19:17:19.312218638 +0000 UTC m=+0.083837915 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 19:17:19 compute-0 podman[243601]: 2026-01-27 19:17:19.327344494 +0000 UTC m=+0.101827230 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 19:17:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:20.525 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:17:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:20.526 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:17:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:20.527 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:17:21 compute-0 nova_compute[185480]: 2026-01-27 19:17:21.838 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:22 compute-0 podman[243644]: 2026-01-27 19:17:22.324468416 +0000 UTC m=+0.093075129 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, container_name=kepler)
Jan 27 19:17:22 compute-0 nova_compute[185480]: 2026-01-27 19:17:22.856 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.193 185484 DEBUG oslo_concurrency.lockutils [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.194 185484 DEBUG oslo_concurrency.lockutils [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.195 185484 DEBUG oslo_concurrency.lockutils [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.195 185484 DEBUG oslo_concurrency.lockutils [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.195 185484 DEBUG oslo_concurrency.lockutils [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.197 185484 INFO nova.compute.manager [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Terminating instance
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.198 185484 DEBUG nova.compute.manager [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 19:17:23 compute-0 kernel: tapa9d60848-08 (unregistering): left promiscuous mode
Jan 27 19:17:23 compute-0 NetworkManager[56191]: <info>  [1769541443.2424] device (tapa9d60848-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 19:17:23 compute-0 ovn_controller[97647]: 2026-01-27T19:17:23Z|00054|binding|INFO|Releasing lport a9d60848-08b8-4d9d-9aad-b656c10474d8 from this chassis (sb_readonly=0)
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.252 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:23 compute-0 ovn_controller[97647]: 2026-01-27T19:17:23Z|00055|binding|INFO|Setting lport a9d60848-08b8-4d9d-9aad-b656c10474d8 down in Southbound
Jan 27 19:17:23 compute-0 ovn_controller[97647]: 2026-01-27T19:17:23Z|00056|binding|INFO|Removing iface tapa9d60848-08 ovn-installed in OVS
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.255 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.265 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.267 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:40:8d 192.168.0.63'], port_security=['fa:16:3e:42:40:8d 192.168.0.63'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-z6txynvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-port-olblivelu66d', 'neutron:cidrs': '192.168.0.63/24', 'neutron:device_id': '04920b61-96ec-47fc-9d6d-dfdb491e0e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-z6txynvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-port-olblivelu66d', 'neutron:project_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a99badb-bb64-4e2f-95a8-78f317eb6676', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33ebbec8-56f4-45ac-84a6-f80dd4a7c167, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=a9d60848-08b8-4d9d-9aad-b656c10474d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.268 106898 INFO neutron.agent.ovn.metadata.agent [-] Port a9d60848-08b8-4d9d-9aad-b656c10474d8 in datapath 4f32262d-dee8-406b-8a5a-09e95f48c8d5 unbound from our chassis
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.270 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f32262d-dee8-406b-8a5a-09e95f48c8d5
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.286 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2b6834-d9a1-42cd-9ed6-0a59cc3cfa57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:17:23 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 27 19:17:23 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 30.319s CPU time.
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.316 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[6f67d649-c580-4df1-8db9-ee4ef26e6cc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:17:23 compute-0 systemd-machined[156762]: Machine qemu-3-instance-00000003 terminated.
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.320 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[25593c2e-e215-4613-b27b-ec83cc57d389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.347 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[1f9aa771-ea32-4ca3-b8fb-300067bf962b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.363 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[9ca97a81-d902-4a53-bdcd-b6b294e34592]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f32262d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:bd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383551, 'reachable_time': 41862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243676, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.380 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[b28d8a4f-6219-4a8a-8d4b-a37afbd1670f]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap4f32262d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383562, 'tstamp': 383562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243677, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4f32262d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383566, 'tstamp': 383566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243677, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.381 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f32262d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.383 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.388 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.388 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f32262d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.389 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.389 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f32262d-d0, col_values=(('external_ids', {'iface-id': '5950ebf0-6d13-4405-b07d-fec152665bda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:17:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:23.389 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.490 185484 INFO nova.virt.libvirt.driver [-] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Instance destroyed successfully.
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.491 185484 DEBUG nova.objects.instance [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'resources' on Instance uuid 04920b61-96ec-47fc-9d6d-dfdb491e0e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.514 185484 DEBUG nova.virt.libvirt.vif [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T19:09:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-nvcwoqi-rmq4sc4wr2r7-rs7nawgfxai4-vnf-y7mnafpz4cn6',id=3,image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T19:09:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f04ec1493db14ca1adbb4b6abd1667b1',ramdisk_id='',reservation_id='r-jdpfr1nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T19:09:36Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNDM5NjA0NDkwNTM3OTA2MDg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA0Mzk2MDQ0OTA1Mzc5MDYwODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDQzOTYwNDQ5MDUzNzkwNjA4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA0Mzk2MDQ0OTA1Mzc5MDYwODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNDM5NjA0NDkwNTM3OTA2MDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNDM5NjA0NDkwNTM3OTA2MDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 27 19:17:23 compute-0 nova_compute[185480]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDQzOTYwNDQ5MDUzNzkwNjA4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA0Mzk2MDQ0OTA1Mzc5MDYwODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNDM5NjA0NDkwNTM3OTA2MDg4PT0tLQo=',user_id='6d30d46dc88a4403b3a241949384d8f7',uuid=04920b61-96ec-47fc-9d6d-dfdb491e0e77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.514 185484 DEBUG nova.network.os_vif_util [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converting VIF {"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.515 185484 DEBUG nova.network.os_vif_util [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:40:8d,bridge_name='br-int',has_traffic_filtering=True,id=a9d60848-08b8-4d9d-9aad-b656c10474d8,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9d60848-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.516 185484 DEBUG os_vif [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:40:8d,bridge_name='br-int',has_traffic_filtering=True,id=a9d60848-08b8-4d9d-9aad-b656c10474d8,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9d60848-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.518 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.518 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9d60848-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.523 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.525 185484 INFO os_vif [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:40:8d,bridge_name='br-int',has_traffic_filtering=True,id=a9d60848-08b8-4d9d-9aad-b656c10474d8,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9d60848-08')
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.527 185484 INFO nova.virt.libvirt.driver [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Deleting instance files /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77_del
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.528 185484 INFO nova.virt.libvirt.driver [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Deletion of /var/lib/nova/instances/04920b61-96ec-47fc-9d6d-dfdb491e0e77_del complete
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.602 185484 INFO nova.compute.manager [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Took 0.40 seconds to destroy the instance on the hypervisor.
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.603 185484 DEBUG oslo.service.loopingcall [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.603 185484 DEBUG nova.compute.manager [-] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 19:17:23 compute-0 nova_compute[185480]: 2026-01-27 19:17:23.604 185484 DEBUG nova.network.neutron [-] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 19:17:23 compute-0 rsyslogd[235877]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 19:17:23.514 185484 DEBUG nova.virt.libvirt.vif [None req-d3d4ab0b-03 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:17:25 compute-0 nova_compute[185480]: 2026-01-27 19:17:25.207 185484 DEBUG nova.compute.manager [req-cb507503-4f29-4dae-805e-e6900879501d req-5e241ea7-73c7-43b2-8eca-b778c668c4bd bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Received event network-vif-unplugged-a9d60848-08b8-4d9d-9aad-b656c10474d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:17:25 compute-0 nova_compute[185480]: 2026-01-27 19:17:25.208 185484 DEBUG oslo_concurrency.lockutils [req-cb507503-4f29-4dae-805e-e6900879501d req-5e241ea7-73c7-43b2-8eca-b778c668c4bd bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:17:25 compute-0 nova_compute[185480]: 2026-01-27 19:17:25.208 185484 DEBUG oslo_concurrency.lockutils [req-cb507503-4f29-4dae-805e-e6900879501d req-5e241ea7-73c7-43b2-8eca-b778c668c4bd bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:17:25 compute-0 nova_compute[185480]: 2026-01-27 19:17:25.208 185484 DEBUG oslo_concurrency.lockutils [req-cb507503-4f29-4dae-805e-e6900879501d req-5e241ea7-73c7-43b2-8eca-b778c668c4bd bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:17:25 compute-0 nova_compute[185480]: 2026-01-27 19:17:25.208 185484 DEBUG nova.compute.manager [req-cb507503-4f29-4dae-805e-e6900879501d req-5e241ea7-73c7-43b2-8eca-b778c668c4bd bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] No waiting events found dispatching network-vif-unplugged-a9d60848-08b8-4d9d-9aad-b656c10474d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:17:25 compute-0 nova_compute[185480]: 2026-01-27 19:17:25.209 185484 DEBUG nova.compute.manager [req-cb507503-4f29-4dae-805e-e6900879501d req-5e241ea7-73c7-43b2-8eca-b778c668c4bd bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Received event network-vif-unplugged-a9d60848-08b8-4d9d-9aad-b656c10474d8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 19:17:25 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:25.623 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:17:25 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:25.624 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:17:25 compute-0 nova_compute[185480]: 2026-01-27 19:17:25.626 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:25 compute-0 nova_compute[185480]: 2026-01-27 19:17:25.746 185484 DEBUG nova.compute.manager [req-156a15a9-7609-4216-932b-2971193446ac req-7497261d-aaad-433e-a416-4d6e74acd686 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Received event network-changed-a9d60848-08b8-4d9d-9aad-b656c10474d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:17:25 compute-0 nova_compute[185480]: 2026-01-27 19:17:25.746 185484 DEBUG nova.compute.manager [req-156a15a9-7609-4216-932b-2971193446ac req-7497261d-aaad-433e-a416-4d6e74acd686 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Refreshing instance network info cache due to event network-changed-a9d60848-08b8-4d9d-9aad-b656c10474d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:17:25 compute-0 nova_compute[185480]: 2026-01-27 19:17:25.746 185484 DEBUG oslo_concurrency.lockutils [req-156a15a9-7609-4216-932b-2971193446ac req-7497261d-aaad-433e-a416-4d6e74acd686 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:17:25 compute-0 nova_compute[185480]: 2026-01-27 19:17:25.747 185484 DEBUG oslo_concurrency.lockutils [req-156a15a9-7609-4216-932b-2971193446ac req-7497261d-aaad-433e-a416-4d6e74acd686 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:17:25 compute-0 nova_compute[185480]: 2026-01-27 19:17:25.747 185484 DEBUG nova.network.neutron [req-156a15a9-7609-4216-932b-2971193446ac req-7497261d-aaad-433e-a416-4d6e74acd686 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Refreshing network info cache for port a9d60848-08b8-4d9d-9aad-b656c10474d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:17:26 compute-0 nova_compute[185480]: 2026-01-27 19:17:26.071 185484 DEBUG nova.network.neutron [-] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:17:26 compute-0 nova_compute[185480]: 2026-01-27 19:17:26.106 185484 INFO nova.compute.manager [-] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Took 2.50 seconds to deallocate network for instance.
Jan 27 19:17:26 compute-0 nova_compute[185480]: 2026-01-27 19:17:26.158 185484 DEBUG oslo_concurrency.lockutils [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:17:26 compute-0 nova_compute[185480]: 2026-01-27 19:17:26.158 185484 DEBUG oslo_concurrency.lockutils [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:17:26 compute-0 nova_compute[185480]: 2026-01-27 19:17:26.278 185484 DEBUG nova.compute.provider_tree [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:17:26 compute-0 nova_compute[185480]: 2026-01-27 19:17:26.312 185484 DEBUG nova.scheduler.client.report [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:17:26 compute-0 nova_compute[185480]: 2026-01-27 19:17:26.355 185484 DEBUG oslo_concurrency.lockutils [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:17:26 compute-0 nova_compute[185480]: 2026-01-27 19:17:26.389 185484 INFO nova.scheduler.client.report [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Deleted allocations for instance 04920b61-96ec-47fc-9d6d-dfdb491e0e77
Jan 27 19:17:26 compute-0 nova_compute[185480]: 2026-01-27 19:17:26.459 185484 DEBUG oslo_concurrency.lockutils [None req-d3d4ab0b-03e4-47a1-8bc8-e9c9614cbbe7 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:17:26 compute-0 nova_compute[185480]: 2026-01-27 19:17:26.891 185484 DEBUG nova.network.neutron [req-156a15a9-7609-4216-932b-2971193446ac req-7497261d-aaad-433e-a416-4d6e74acd686 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Updated VIF entry in instance network info cache for port a9d60848-08b8-4d9d-9aad-b656c10474d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:17:26 compute-0 nova_compute[185480]: 2026-01-27 19:17:26.892 185484 DEBUG nova.network.neutron [req-156a15a9-7609-4216-932b-2971193446ac req-7497261d-aaad-433e-a416-4d6e74acd686 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Updating instance_info_cache with network_info: [{"id": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "address": "fa:16:3e:42:40:8d", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9d60848-08", "ovs_interfaceid": "a9d60848-08b8-4d9d-9aad-b656c10474d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:17:26 compute-0 nova_compute[185480]: 2026-01-27 19:17:26.919 185484 DEBUG oslo_concurrency.lockutils [req-156a15a9-7609-4216-932b-2971193446ac req-7497261d-aaad-433e-a416-4d6e74acd686 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-04920b61-96ec-47fc-9d6d-dfdb491e0e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:17:27 compute-0 nova_compute[185480]: 2026-01-27 19:17:27.318 185484 DEBUG nova.compute.manager [req-3d55ef4f-5be0-402f-b658-0e5ac04e69c0 req-0ff89c49-e3d0-42a4-b553-2266fa5990d6 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Received event network-vif-plugged-a9d60848-08b8-4d9d-9aad-b656c10474d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:17:27 compute-0 nova_compute[185480]: 2026-01-27 19:17:27.319 185484 DEBUG oslo_concurrency.lockutils [req-3d55ef4f-5be0-402f-b658-0e5ac04e69c0 req-0ff89c49-e3d0-42a4-b553-2266fa5990d6 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:17:27 compute-0 nova_compute[185480]: 2026-01-27 19:17:27.319 185484 DEBUG oslo_concurrency.lockutils [req-3d55ef4f-5be0-402f-b658-0e5ac04e69c0 req-0ff89c49-e3d0-42a4-b553-2266fa5990d6 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:17:27 compute-0 nova_compute[185480]: 2026-01-27 19:17:27.320 185484 DEBUG oslo_concurrency.lockutils [req-3d55ef4f-5be0-402f-b658-0e5ac04e69c0 req-0ff89c49-e3d0-42a4-b553-2266fa5990d6 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "04920b61-96ec-47fc-9d6d-dfdb491e0e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:17:27 compute-0 nova_compute[185480]: 2026-01-27 19:17:27.320 185484 DEBUG nova.compute.manager [req-3d55ef4f-5be0-402f-b658-0e5ac04e69c0 req-0ff89c49-e3d0-42a4-b553-2266fa5990d6 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] No waiting events found dispatching network-vif-plugged-a9d60848-08b8-4d9d-9aad-b656c10474d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:17:27 compute-0 nova_compute[185480]: 2026-01-27 19:17:27.320 185484 WARNING nova.compute.manager [req-3d55ef4f-5be0-402f-b658-0e5ac04e69c0 req-0ff89c49-e3d0-42a4-b553-2266fa5990d6 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Received unexpected event network-vif-plugged-a9d60848-08b8-4d9d-9aad-b656c10474d8 for instance with vm_state deleted and task_state None.
Jan 27 19:17:27 compute-0 nova_compute[185480]: 2026-01-27 19:17:27.860 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:28 compute-0 podman[243700]: 2026-01-27 19:17:28.349909846 +0000 UTC m=+0.112485156 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, name=ubi9-minimal, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 19:17:28 compute-0 nova_compute[185480]: 2026-01-27 19:17:28.521 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:17:29.627 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:17:29 compute-0 podman[201378]: time="2026-01-27T19:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:17:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:17:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 27 19:17:31 compute-0 openstack_network_exporter[204477]: ERROR   19:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:17:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:17:31 compute-0 openstack_network_exporter[204477]: ERROR   19:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:17:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:17:32 compute-0 nova_compute[185480]: 2026-01-27 19:17:32.863 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:33 compute-0 nova_compute[185480]: 2026-01-27 19:17:33.524 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:35 compute-0 podman[243722]: 2026-01-27 19:17:35.290191308 +0000 UTC m=+0.073791213 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Jan 27 19:17:37 compute-0 nova_compute[185480]: 2026-01-27 19:17:37.865 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:38 compute-0 podman[243742]: 2026-01-27 19:17:38.291704598 +0000 UTC m=+0.069008188 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:17:38 compute-0 podman[243744]: 2026-01-27 19:17:38.328304722 +0000 UTC m=+0.087970746 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 19:17:38 compute-0 podman[243743]: 2026-01-27 19:17:38.401427777 +0000 UTC m=+0.160171728 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 27 19:17:38 compute-0 nova_compute[185480]: 2026-01-27 19:17:38.487 185484 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769541443.4853127, 04920b61-96ec-47fc-9d6d-dfdb491e0e77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:17:38 compute-0 nova_compute[185480]: 2026-01-27 19:17:38.488 185484 INFO nova.compute.manager [-] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] VM Stopped (Lifecycle Event)
Jan 27 19:17:38 compute-0 nova_compute[185480]: 2026-01-27 19:17:38.520 185484 DEBUG nova.compute.manager [None req-798087a7-19d8-4ae5-ab0c-5a77d93c1f2c - - - - - -] [instance: 04920b61-96ec-47fc-9d6d-dfdb491e0e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:17:38 compute-0 nova_compute[185480]: 2026-01-27 19:17:38.526 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:42 compute-0 nova_compute[185480]: 2026-01-27 19:17:42.868 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:43 compute-0 nova_compute[185480]: 2026-01-27 19:17:43.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:17:43 compute-0 nova_compute[185480]: 2026-01-27 19:17:43.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:17:43 compute-0 nova_compute[185480]: 2026-01-27 19:17:43.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:17:43 compute-0 nova_compute[185480]: 2026-01-27 19:17:43.528 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:44 compute-0 nova_compute[185480]: 2026-01-27 19:17:44.060 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:17:44 compute-0 nova_compute[185480]: 2026-01-27 19:17:44.061 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:17:44 compute-0 nova_compute[185480]: 2026-01-27 19:17:44.061 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:17:44 compute-0 nova_compute[185480]: 2026-01-27 19:17:44.062 185484 DEBUG nova.objects.instance [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6b280bb-d859-43f3-836a-f93d00510948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:17:45 compute-0 nova_compute[185480]: 2026-01-27 19:17:45.431 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updating instance_info_cache with network_info: [{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:17:45 compute-0 nova_compute[185480]: 2026-01-27 19:17:45.452 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:17:45 compute-0 nova_compute[185480]: 2026-01-27 19:17:45.453 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:17:47 compute-0 nova_compute[185480]: 2026-01-27 19:17:47.871 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:48 compute-0 nova_compute[185480]: 2026-01-27 19:17:48.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:17:48 compute-0 nova_compute[185480]: 2026-01-27 19:17:48.530 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:50 compute-0 podman[243809]: 2026-01-27 19:17:50.323562444 +0000 UTC m=+0.094263737 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Jan 27 19:17:50 compute-0 podman[243808]: 2026-01-27 19:17:50.338500385 +0000 UTC m=+0.113050961 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.543 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.544 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.545 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.545 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.664 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.731 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.732 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.790 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.791 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.850 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.852 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.912 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.923 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.987 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:17:51 compute-0 nova_compute[185480]: 2026-01-27 19:17:51.989 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.081 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.083 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.167 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.169 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.250 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.644 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.646 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4904MB free_disk=72.39982223510742GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.646 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.647 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.751 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.751 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.752 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.752 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.822 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.835 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.856 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.856 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:17:52 compute-0 nova_compute[185480]: 2026-01-27 19:17:52.873 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:53 compute-0 podman[243875]: 2026-01-27 19:17:53.315892903 +0000 UTC m=+0.094383209 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.29.0, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Jan 27 19:17:53 compute-0 nova_compute[185480]: 2026-01-27 19:17:53.533 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:53 compute-0 nova_compute[185480]: 2026-01-27 19:17:53.857 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:17:53 compute-0 nova_compute[185480]: 2026-01-27 19:17:53.857 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:17:53 compute-0 nova_compute[185480]: 2026-01-27 19:17:53.858 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:17:55 compute-0 nova_compute[185480]: 2026-01-27 19:17:55.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:17:55 compute-0 nova_compute[185480]: 2026-01-27 19:17:55.517 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:17:57 compute-0 sshd-session[243894]: Accepted publickey for zuul from 38.102.83.144 port 48444 ssh2: RSA SHA256:jhFpR9mdpMGvU2F0q/HJAkqGxozs6TWh9oCwMxPPlpE
Jan 27 19:17:57 compute-0 systemd-logind[795]: New session 29 of user zuul.
Jan 27 19:17:57 compute-0 systemd[1]: Started Session 29 of User zuul.
Jan 27 19:17:57 compute-0 sshd-session[243894]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 19:17:57 compute-0 nova_compute[185480]: 2026-01-27 19:17:57.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:17:57 compute-0 nova_compute[185480]: 2026-01-27 19:17:57.875 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:58 compute-0 sudo[244071]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzgnwnajzvopcxbrqbdxvpenahgxwcie ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769541477.4978535-58296-251302265515109/AnsiballZ_command.py'
Jan 27 19:17:58 compute-0 sudo[244071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:17:58 compute-0 ovn_controller[97647]: 2026-01-27T19:17:58Z|00057|memory_trim|INFO|Detected inactivity (last active 30022 ms ago): trimming memory
Jan 27 19:17:58 compute-0 python3[244073]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:17:58 compute-0 sudo[244071]: pam_unix(sudo:session): session closed for user root
Jan 27 19:17:58 compute-0 nova_compute[185480]: 2026-01-27 19:17:58.536 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:17:59 compute-0 podman[244112]: 2026-01-27 19:17:59.326874765 +0000 UTC m=+0.096117292 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 19:17:59 compute-0 podman[201378]: time="2026-01-27T19:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:17:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:17:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Jan 27 19:18:00 compute-0 nova_compute[185480]: 2026-01-27 19:18:00.512 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:18:01 compute-0 openstack_network_exporter[204477]: ERROR   19:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:18:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:18:01 compute-0 openstack_network_exporter[204477]: ERROR   19:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:18:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:18:02 compute-0 nova_compute[185480]: 2026-01-27 19:18:02.878 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:03 compute-0 nova_compute[185480]: 2026-01-27 19:18:03.539 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:06 compute-0 podman[244133]: 2026-01-27 19:18:06.320783745 +0000 UTC m=+0.095641120 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 27 19:18:07 compute-0 nova_compute[185480]: 2026-01-27 19:18:07.881 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:08 compute-0 nova_compute[185480]: 2026-01-27 19:18:08.542 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:09 compute-0 podman[244153]: 2026-01-27 19:18:09.34944098 +0000 UTC m=+0.115014628 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 27 19:18:09 compute-0 podman[244151]: 2026-01-27 19:18:09.358895648 +0000 UTC m=+0.125023709 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 19:18:09 compute-0 podman[244152]: 2026-01-27 19:18:09.398631468 +0000 UTC m=+0.157880263 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Jan 27 19:18:12 compute-0 nova_compute[185480]: 2026-01-27 19:18:12.884 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:13 compute-0 nova_compute[185480]: 2026-01-27 19:18:13.544 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.506 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "1deabd7a-2569-4693-8eb6-b1c19e772784" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.507 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "1deabd7a-2569-4693-8eb6-b1c19e772784" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.525 185484 DEBUG nova.compute.manager [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.601 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.602 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.617 185484 DEBUG nova.virt.hardware [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.618 185484 INFO nova.compute.claims [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Claim successful on node compute-0.ctlplane.example.com
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.767 185484 DEBUG nova.compute.provider_tree [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.786 185484 DEBUG nova.scheduler.client.report [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
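[editor note] The inventory payload above maps to schedulable capacity as (total - reserved) * allocation_ratio per resource class; the integer truncation below is an assumption for illustration:

```python
# Sketch: capacity implied by the inventory logged above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    print(rc, capacity)  # VCPU 32, MEMORY_MB 7167, DISK_GB 70
```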
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.815 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.816 185484 DEBUG nova.compute.manager [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.857 185484 DEBUG nova.compute.manager [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.873 185484 INFO nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.906 185484 DEBUG nova.compute.manager [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.992 185484 DEBUG nova.compute.manager [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.993 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.994 185484 INFO nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Creating image(s)
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.994 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "/var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.995 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.995 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.996 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "b8c9c7b811378f888275e874dad98d2c3f7ceda3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:14 compute-0 nova_compute[185480]: 2026-01-27 19:18:14.996 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "b8c9c7b811378f888275e874dad98d2c3f7ceda3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.268 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.330 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3.part --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.332 185484 DEBUG nova.virt.images [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] 72dec987-2501-4bf5-be24-6e64716f3c93 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.334 185484 DEBUG nova.privsep.utils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.334 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3.part /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.563 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3.part /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3.converted" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.569 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.646 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3.converted --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.647 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "b8c9c7b811378f888275e874dad98d2c3f7ceda3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
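[editor note] The sequence above populates the base image cache in /var/lib/nova/instances/_base: probe the downloaded .part file with qemu-img info, convert qcow2 to raw into a .converted file, then probe again. A sketch reproducing those logged commands (without the prlimit wrapper nova adds; paths copied from the log for illustration):

```python
# Sketch of the qemu-img probe/convert calls shown in the log above.
import json
import subprocess

base = '/var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3'

def img_info(path):
    out = subprocess.check_output(
        ['qemu-img', 'info', path, '--force-share', '--output=json'])
    return json.loads(out)

info = img_info(base + '.part')
if info['format'] == 'qcow2':
    # Convert the downloaded qcow2 into a raw base image, as in the log.
    subprocess.check_call(
        ['qemu-img', 'convert', '-t', 'none', '-O', 'raw', '-f', 'qcow2',
         base + '.part', base + '.converted'])
```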
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.666 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.720 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.721 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "b8c9c7b811378f888275e874dad98d2c3f7ceda3" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.722 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "b8c9c7b811378f888275e874dad98d2c3f7ceda3" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.738 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.800 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.801 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3,backing_fmt=raw /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.850 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3,backing_fmt=raw /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.851 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "b8c9c7b811378f888275e874dad98d2c3f7ceda3" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.852 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.926 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b8c9c7b811378f888275e874dad98d2c3f7ceda3 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.928 185484 DEBUG nova.virt.disk.api [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Checking if we can resize image /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.928 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.986 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.987 185484 DEBUG nova.virt.disk.api [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Cannot resize image /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
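[editor note] "Cannot resize image ... to a smaller size" means the requested size (1073741824 bytes, the 1 GiB flavor root disk) is not larger than the image's current virtual size, and qcow2 disks are only ever grown here. A hedged sketch of that check using the same qemu-img JSON probe seen above:

```python
# Sketch of the grow-only resize check behind the log line above.
import json
import subprocess

def can_resize_image(path, requested_bytes):
    out = subprocess.check_output(
        ['qemu-img', 'info', path, '--force-share', '--output=json'])
    virtual_size = json.loads(out)['virtual-size']
    return requested_bytes > virtual_size

print(can_resize_image(
    '/var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk',
    1073741824))
```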
Jan 27 19:18:16 compute-0 nova_compute[185480]: 2026-01-27 19:18:16.988 185484 DEBUG nova.objects.instance [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 1deabd7a-2569-4693-8eb6-b1c19e772784 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.008 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "/var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.009 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.009 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "/var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.022 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.095 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.096 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.097 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.108 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.165 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.166 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.204 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.eph0 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.205 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.205 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.297 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.298 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.298 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Ensure instance console log exists: /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.299 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.299 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.299 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.302 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T19:18:02Z,direct_url=<?>,disk_format='qcow2',id=72dec987-2501-4bf5-be24-6e64716f3c93,min_disk=0,min_ram=0,name='fvt_testing_image',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T19:18:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': '72dec987-2501-4bf5-be24-6e64716f3c93'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.308 185484 WARNING nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.316 185484 DEBUG nova.virt.libvirt.host [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.317 185484 DEBUG nova.virt.libvirt.host [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.323 185484 DEBUG nova.virt.libvirt.host [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.324 185484 DEBUG nova.virt.libvirt.host [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.324 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.324 185484 DEBUG nova.virt.hardware [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T19:18:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='390e642c-43b3-4ddc-9c26-787f4f15a1bd',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T19:18:02Z,direct_url=<?>,disk_format='qcow2',id=72dec987-2501-4bf5-be24-6e64716f3c93,min_disk=0,min_ram=0,name='fvt_testing_image',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T19:18:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.325 185484 DEBUG nova.virt.hardware [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.325 185484 DEBUG nova.virt.hardware [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.326 185484 DEBUG nova.virt.hardware [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.326 185484 DEBUG nova.virt.hardware [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.326 185484 DEBUG nova.virt.hardware [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.326 185484 DEBUG nova.virt.hardware [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.326 185484 DEBUG nova.virt.hardware [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.327 185484 DEBUG nova.virt.hardware [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.327 185484 DEBUG nova.virt.hardware [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.327 185484 DEBUG nova.virt.hardware [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.332 185484 DEBUG nova.objects.instance [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1deabd7a-2569-4693-8eb6-b1c19e772784 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.353 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] End _get_guest_xml xml=<domain type="kvm">
Jan 27 19:18:17 compute-0 nova_compute[185480]:   <uuid>1deabd7a-2569-4693-8eb6-b1c19e772784</uuid>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   <name>instance-00000005</name>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   <memory>524288</memory>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   <vcpu>1</vcpu>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   <metadata>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <nova:name>fvt_testing_server</nova:name>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <nova:creationTime>2026-01-27 19:18:17</nova:creationTime>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <nova:flavor name="fvt_testing_flavor">
Jan 27 19:18:17 compute-0 nova_compute[185480]:         <nova:memory>512</nova:memory>
Jan 27 19:18:17 compute-0 nova_compute[185480]:         <nova:disk>1</nova:disk>
Jan 27 19:18:17 compute-0 nova_compute[185480]:         <nova:swap>0</nova:swap>
Jan 27 19:18:17 compute-0 nova_compute[185480]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 19:18:17 compute-0 nova_compute[185480]:         <nova:vcpus>1</nova:vcpus>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       </nova:flavor>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <nova:owner>
Jan 27 19:18:17 compute-0 nova_compute[185480]:         <nova:user uuid="6d30d46dc88a4403b3a241949384d8f7">admin</nova:user>
Jan 27 19:18:17 compute-0 nova_compute[185480]:         <nova:project uuid="f04ec1493db14ca1adbb4b6abd1667b1">admin</nova:project>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       </nova:owner>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <nova:root type="image" uuid="72dec987-2501-4bf5-be24-6e64716f3c93"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <nova:ports/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     </nova:instance>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   </metadata>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   <sysinfo type="smbios">
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <system>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <entry name="manufacturer">RDO</entry>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <entry name="product">OpenStack Compute</entry>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <entry name="serial">1deabd7a-2569-4693-8eb6-b1c19e772784</entry>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <entry name="uuid">1deabd7a-2569-4693-8eb6-b1c19e772784</entry>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <entry name="family">Virtual Machine</entry>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     </system>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   </sysinfo>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   <os>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <boot dev="hd"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <smbios mode="sysinfo"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   </os>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   <features>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <acpi/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <apic/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <vmcoreinfo/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   </features>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   <clock offset="utc">
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <timer name="hpet" present="no"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   </clock>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   <cpu mode="host-model" match="exact">
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   </cpu>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   <devices>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <target dev="vda" bus="virtio"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.eph0"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <target dev="vdb" bus="virtio"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <disk type="file" device="cdrom">
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.config"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <target dev="sda" bus="sata"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <serial type="pty">
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <log file="/var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/console.log" append="off"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     </serial>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <video>
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     </video>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <input type="tablet" bus="usb"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <rng model="virtio">
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <backend model="random">/dev/urandom</backend>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     </rng>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <controller type="usb" index="0"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     <memballoon model="virtio">
Jan 27 19:18:17 compute-0 nova_compute[185480]:       <stats period="10"/>
Jan 27 19:18:17 compute-0 nova_compute[185480]:     </memballoon>
Jan 27 19:18:17 compute-0 nova_compute[185480]:   </devices>
Jan 27 19:18:17 compute-0 nova_compute[185480]: </domain>
Jan 27 19:18:17 compute-0 nova_compute[185480]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
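[editor note] The XML dumped above is the libvirt domain definition nova passes to libvirtd; the flavor values map through directly (512 MiB appears as <memory>524288</memory> because libvirt's default unit is KiB, 1 vCPU, root disk plus eph0 plus the config-drive cdrom). A minimal sketch, assuming the libvirt-python bindings and local qemu:///system access, that fetches the same XML once the guest exists:

```python
# Sketch: read back the domain definition logged above from libvirtd.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('1deabd7a-2569-4693-8eb6-b1c19e772784')
print(dom.XMLDesc(0))
conn.close()
```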
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.404 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.405 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.405 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.405 185484 INFO nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Using config drive
Jan 27 19:18:17 compute-0 nova_compute[185480]: 2026-01-27 19:18:17.887 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:18 compute-0 nova_compute[185480]: 2026-01-27 19:18:18.078 185484 INFO nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Creating config drive at /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.config
Jan 27 19:18:18 compute-0 nova_compute[185480]: 2026-01-27 19:18:18.084 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3wy7uzq3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:18 compute-0 nova_compute[185480]: 2026-01-27 19:18:18.213 185484 DEBUG oslo_concurrency.processutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3wy7uzq3" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
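[editor note] The config drive is an ISO9660 volume labeled config-2, built from a temporary metadata tree and attached as the sata cdrom in the domain XML. A sketch reproducing the mkisofs invocation logged above (the /tmp path is whatever temporary directory the run used; shown here only for illustration):

```python
# Sketch of the config-drive build command logged above.
import subprocess

subprocess.check_call([
    'mkisofs', '-o',
    '/var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784/disk.config',
    '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
    '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
    '-quiet', '-J', '-r', '-V', 'config-2',
    '/tmp/tmp3wy7uzq3',
])
```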
Jan 27 19:18:18 compute-0 systemd-machined[156762]: New machine qemu-5-instance-00000005.
Jan 27 19:18:18 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Jan 27 19:18:18 compute-0 nova_compute[185480]: 2026-01-27 19:18:18.547 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:18 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 19:18:18 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.058 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769541499.0575273, 1deabd7a-2569-4693-8eb6-b1c19e772784 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.058 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] VM Resumed (Lifecycle Event)
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.061 185484 DEBUG nova.compute.manager [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.062 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.067 185484 INFO nova.virt.libvirt.driver [-] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Instance spawned successfully.
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.068 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.082 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.089 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
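[editor note] In the sync message above, "current DB power_state: 0, VM power_state: 1" uses nova's integer power-state codes: the database still records the pre-boot state while the hypervisor already reports the guest running. A hedged sketch of that mapping (values as in nova.compute.power_state, listed here for readability):

```python
# Sketch: integer power-state codes seen in the sync messages above.
POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
               4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
print(POWER_STATE[0], '->', POWER_STATE[1])  # DB says NOSTATE, VM is RUNNING
```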
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.104 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.104 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.105 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.105 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.106 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.106 185484 DEBUG nova.virt.libvirt.driver [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
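The six "Found default for ..." entries above are the libvirt driver filling in image properties that were never set on the Glance image used for this boot. As a hedged illustration only (the image name below is hypothetical; the property names and values are exactly the defaults logged above), the same properties could be pinned on the image up front with python-openstackclient so the driver has nothing left to register:

    import subprocess

    # Defaults registered by nova above, expressed as explicit image properties.
    defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    image = "fvt_testing_image"  # hypothetical image name, for illustration only
    cmd = ["openstack", "image", "set"]
    for key, value in defaults.items():
        cmd += ["--property", f"{key}={value}"]
    cmd.append(image)
    subprocess.run(cmd, check=True)  # needs python-openstackclient and valid OS_* credentials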
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.121 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.121 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769541499.0614405, 1deabd7a-2569-4693-8eb6-b1c19e772784 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.121 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] VM Started (Lifecycle Event)
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.154 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.159 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.191 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] During sync_power_state the instance has a pending task (spawning). Skip.
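The two "Synchronizing instance power state" entries compare the DB power_state (0, nothing recorded yet) with the hypervisor-reported power_state (1, running) while the task_state is still "spawning", and both cycles end with "pending task ... Skip." A toy sketch of that decision, assuming the 0 = NOSTATE / 1 = RUNNING integer convention visible in the log:

    NOSTATE = 0   # "current DB power_state" above
    RUNNING = 1   # "VM power_state" above

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # Mirrors the behaviour logged above: a pending task wins over any mismatch.
        if task_state is not None:            # e.g. "spawning"
            return "skip: pending task"
        if db_power_state != vm_power_state:
            return "update DB to match hypervisor"
        return "in sync"

    print(sync_power_state(NOSTATE, RUNNING, "spawning"))  # -> skip: pending task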
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.219 185484 INFO nova.compute.manager [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Took 4.23 seconds to spawn the instance on the hypervisor.
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.220 185484 DEBUG nova.compute.manager [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.306 185484 INFO nova.compute.manager [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Took 4.74 seconds to build instance.
Jan 27 19:18:19 compute-0 nova_compute[185480]: 2026-01-27 19:18:19.331 185484 DEBUG oslo_concurrency.lockutils [None req-aeed746c-369e-46bd-81af-b329f282e1d2 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "1deabd7a-2569-4693-8eb6-b1c19e772784" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:18:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:18:20.527 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:18:20.530 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:18:20.531 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
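Both services log the same oslo.concurrency pattern: a named lock is acquired (with the time spent waiting), the critical section runs, and the release line reports how long the lock was held. A minimal sketch of that pattern, reusing the "_check_child_processes" lock name from the neutron agent lines above:

    from oslo_concurrency import lockutils

    # The decorator produces the "Acquiring lock", "acquired ... waited Ns" and
    # ""released" ... held Ns" DEBUG lines seen above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # placeholder body; the real method inspects monitored child processes

    check_child_processes()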
Jan 27 19:18:21 compute-0 podman[244309]: 2026-01-27 19:18:21.307150821 +0000 UTC m=+0.072589264 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 19:18:21 compute-0 podman[244310]: 2026-01-27 19:18:21.338867447 +0000 UTC m=+0.096533921 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
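The health_status=healthy events above come from podman running each container's configured healthcheck ("/openstack/healthcheck ...") on a timer. A small sketch of triggering the same check on demand for the node_exporter container named in the event (assumes podman is on PATH and is run with the same privileges as the container owner):

    import subprocess

    # Runs the container's configured healthcheck once; exit code 0 means healthy.
    result = subprocess.run(["podman", "healthcheck", "run", "node_exporter"])
    print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")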
Jan 27 19:18:22 compute-0 nova_compute[185480]: 2026-01-27 19:18:22.891 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:23 compute-0 nova_compute[185480]: 2026-01-27 19:18:23.664 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:24 compute-0 podman[244350]: 2026-01-27 19:18:24.290472013 +0000 UTC m=+0.068914776 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, name=ubi9, version=9.4, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, maintainer=Red Hat, Inc., config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Jan 27 19:18:27 compute-0 nova_compute[185480]: 2026-01-27 19:18:27.893 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:28 compute-0 nova_compute[185480]: 2026-01-27 19:18:28.666 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:29 compute-0 podman[201378]: time="2026-01-27T19:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:18:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:18:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 27 19:18:30 compute-0 podman[244369]: 2026-01-27 19:18:30.283243306 +0000 UTC m=+0.063262399 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 27 19:18:31 compute-0 openstack_network_exporter[204477]: ERROR   19:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:18:31 compute-0 openstack_network_exporter[204477]: ERROR   19:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
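The two appctl errors above come from PMD statistics commands (dpif-netdev/pmd-perf-show, dpif-netdev/pmd-rxq-show) that only apply to Open vSwitch's userspace (netdev) datapath; on a host whose bridges use the kernel datapath there is no such datapath to report on, hence "please specify an existing datapath". A hedged sketch of checking for a netdev datapath before asking for PMD data (assumes ovs-appctl is on PATH and ovs-vswitchd is running):

    import subprocess

    # "ovs-appctl dpif/show" lists datapaths: a DPDK/userspace setup includes a
    # "netdev@..." entry, a kernel-only setup shows just "system@ovs-system".
    dpifs = subprocess.run(["ovs-appctl", "dpif/show"],
                           capture_output=True, text=True, check=True).stdout

    if "netdev@" in dpifs:
        pmd = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                             capture_output=True, text=True)
        print(pmd.stdout or pmd.stderr)
    else:
        print("no netdev datapath; PMD statistics are not available on this host")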
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.098 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.099 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
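These two lines mean the polling task has more pollsters than worker threads, so the registrations that follow are queued onto a single-thread executor and run one after another. A standard-library sketch of that execution model; the pollster names are just examples taken from the meters polled later in this cycle:

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["disk.device.allocation", "memory.usage", "power.state",
                 "network.incoming.bytes.delta", "disk.device.read.requests"]

    def poll(name):
        # Placeholder for a real pollster: gather samples for every discovered instance.
        return f"polled {name}"

    # One worker, as in "Processing pollsters for [pollsters] with [1] threads":
    # pollsters queue up and execute sequentially.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)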
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.107 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b6b280bb-d859-43f3-836a-f93d00510948', 'name': 'test_0', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.110 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '92ff85a4-5620-4dd0-8930-62b7f561edf6', 'name': 'vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.112 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1deabd7a-2569-4693-8eb6-b1c19e772784 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.113 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1deabd7a-2569-4693-8eb6-b1c19e772784 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.426 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Tue, 27 Jan 2026 19:18:32 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-af639ae7-c91a-4478-9134-b27e8fb11cd5 x-openstack-request-id: req-af639ae7-c91a-4478-9134-b27e8fb11cd5 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.426 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1deabd7a-2569-4693-8eb6-b1c19e772784", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "user_id": "6d30d46dc88a4403b3a241949384d8f7", "metadata": {}, "hostId": "d5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0", "image": {"id": "72dec987-2501-4bf5-be24-6e64716f3c93", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/72dec987-2501-4bf5-be24-6e64716f3c93"}]}, "flavor": {"id": "390e642c-43b3-4ddc-9c26-787f4f15a1bd", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/390e642c-43b3-4ddc-9c26-787f4f15a1bd"}]}, "created": "2026-01-27T19:18:13Z", "updated": "2026-01-27T19:18:19Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1deabd7a-2569-4693-8eb6-b1c19e772784"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1deabd7a-2569-4693-8eb6-b1c19e772784"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T19:18:19.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.426 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1deabd7a-2569-4693-8eb6-b1c19e772784 used request id req-af639ae7-c91a-4478-9134-b27e8fb11cd5 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.428 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1deabd7a-2569-4693-8eb6-b1c19e772784', 'name': 'fvt_testing_server', 'flavor': {'id': '390e642c-43b3-4ddc-9c26-787f4f15a1bd', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '72dec987-2501-4bf5-be24-6e64716f3c93'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
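The exchange above shows ceilometer's discovery falling back to the Nova API for the just-spawned instance 1deabd7a-2569-4693-8eb6-b1c19e772784 (not yet in its local metadata cache): a keystoneauth1 session supplies the token and python-novaclient issues the GET /v2.1/servers/<uuid>. A minimal sketch of the same lookup; the credentials and auth URL are placeholders, only the client calls and the server UUID mirror the log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    # Placeholder credentials; the real values belong to the ceilometer service user.
    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # assumed endpoint
        username="ceilometer", password="secret",
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    sess = session.Session(auth=auth)

    # Same request as the logged GET (compute API microversion 2.1).
    nova = nova_client.Client("2.1", session=sess)
    server = nova.servers.get("1deabd7a-2569-4693-8eb6-b1c19e772784")
    print(server.name, server.status)  # -> fvt_testing_server ACTIVE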
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.428 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.428 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.428 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.429 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.430 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T19:18:32.428961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.465 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 21766144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.466 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.466 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.492 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.493 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.494 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.520 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.521 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.521 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.521 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.522 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.522 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.522 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T19:18:32.522346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.556 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.579 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/memory.usage volume: 48.98828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.601 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.601 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 1deabd7a-2569-4693-8eb6-b1c19e772784: ceilometer.compute.pollsters.NoVolumeException
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.601 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
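memory.usage comes out as Unavailable for the instance spawned a few seconds earlier because the libvirt memory statistics for that guest (balloon/rss counters) are not populated yet, while the two longer-running guests report values. A hedged sketch of reading the same counters directly with libvirt-python; the domain name comes from the instance data logged above:

    import libvirt

    # Read-only connection to the same local libvirt the compute pollsters use.
    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000005")  # fvt_testing_server, per the log
    print(dom.memoryStats())  # may be missing 'rss'/'available' right after spawn
    conn.close()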
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.601 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.602 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.602 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.602 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T19:18:32.602115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.602 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.603 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.603 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.603 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T19:18:32.603935) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.608 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.611 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.614 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.614 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.614 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.614 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.614 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.614 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.615 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T19:18:32.614661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.682 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.682 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.682 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.762 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.762 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.762 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.830 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.831 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.831 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.832 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.832 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.832 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.833 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.833 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T19:18:32.832416) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.833 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.833 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.833 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.834 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.834 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.drop volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.834 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T19:18:32.834141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.835 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.835 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.835 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.835 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.836 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T19:18:32.835548) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.836 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.836 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.836 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.837 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.837 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.837 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.837 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.838 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.838 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.838 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.838 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.839 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/cpu volume: 41780000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.839 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/cpu volume: 36770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.839 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/cpu volume: 13160000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T19:18:32.838872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.840 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.840 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.840 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.840 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.840 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.840 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.841 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.841 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.841 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.841 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.842 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.842 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.842 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.843 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T19:18:32.840717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.843 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.843 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.844 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.844 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.844 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.844 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.844 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.845 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.845 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.845 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.845 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T19:18:32.844231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.845 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.845 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.845 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 713872381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.846 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 102610265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.846 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T19:18:32.845804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.846 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 79720785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.846 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 724384888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.847 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 110122219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.847 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 191687644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.847 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.read.latency volume: 754906810 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.847 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.848 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.read.latency volume: 20268619 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.848 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.848 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.849 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.849 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.849 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.849 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.849 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.849 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T19:18:32.849171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.850 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.850 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.850 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.850 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T19:18:32.850597) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.851 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.851 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.851 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.852 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.852 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.852 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.852 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.853 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T19:18:32.852129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.853 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.853 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.854 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.854 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.854 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.854 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.855 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.855 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.855 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.855 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.855 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.855 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.856 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.856 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.856 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.856 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.856 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 2565149834 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.857 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 13830550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.857 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.857 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 2519550782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.858 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 10920156 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T19:18:32.855492) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.858 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.858 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T19:18:32.856821) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.858 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.859 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.859 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.859 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.859 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.860 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.860 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.860 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.860 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T19:18:32.860090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.861 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.861 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.861 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.861 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.861 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.861 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.861 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.862 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.862 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.862 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.863 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.863 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.863 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T19:18:32.861556) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.864 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.864 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.864 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.864 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.865 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.865 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.866 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T19:18:32.865040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.866 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T19:18:32.866384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.866 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.867 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.867 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.867 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.867 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.867 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.867 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.868 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.868 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.868 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.868 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T19:18:32.867481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.869 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.869 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T19:18:32.869018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.869 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.870 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.870 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.870 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.870 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.870 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.870 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T19:18:32.870329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.871 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.871 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.871 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.871 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.872 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.872 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.872 14 DEBUG ceilometer.compute.pollsters [-] 1deabd7a-2569-4693-8eb6-b1c19e772784/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.873 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.873 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.873 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.873 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.873 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes volume: 2256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.874 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.874 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T19:18:32.873563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.874 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.874 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.874 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.874 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.874 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.875 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.875 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.875 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.880 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:18:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:18:32.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T19:18:32.875016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:18:32 compute-0 nova_compute[185480]: 2026-01-27 19:18:32.894 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:33 compute-0 nova_compute[185480]: 2026-01-27 19:18:33.669 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:37 compute-0 nova_compute[185480]: 2026-01-27 19:18:37.243 185484 DEBUG oslo_concurrency.lockutils [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "1deabd7a-2569-4693-8eb6-b1c19e772784" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:37 compute-0 nova_compute[185480]: 2026-01-27 19:18:37.245 185484 DEBUG oslo_concurrency.lockutils [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "1deabd7a-2569-4693-8eb6-b1c19e772784" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:37 compute-0 nova_compute[185480]: 2026-01-27 19:18:37.246 185484 DEBUG oslo_concurrency.lockutils [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "1deabd7a-2569-4693-8eb6-b1c19e772784-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:37 compute-0 nova_compute[185480]: 2026-01-27 19:18:37.247 185484 DEBUG oslo_concurrency.lockutils [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "1deabd7a-2569-4693-8eb6-b1c19e772784-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:37 compute-0 nova_compute[185480]: 2026-01-27 19:18:37.247 185484 DEBUG oslo_concurrency.lockutils [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "1deabd7a-2569-4693-8eb6-b1c19e772784-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:18:37 compute-0 nova_compute[185480]: 2026-01-27 19:18:37.248 185484 INFO nova.compute.manager [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Terminating instance
Jan 27 19:18:37 compute-0 nova_compute[185480]: 2026-01-27 19:18:37.250 185484 DEBUG oslo_concurrency.lockutils [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "refresh_cache-1deabd7a-2569-4693-8eb6-b1c19e772784" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:18:37 compute-0 nova_compute[185480]: 2026-01-27 19:18:37.250 185484 DEBUG oslo_concurrency.lockutils [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquired lock "refresh_cache-1deabd7a-2569-4693-8eb6-b1c19e772784" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:18:37 compute-0 nova_compute[185480]: 2026-01-27 19:18:37.251 185484 DEBUG nova.network.neutron [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:18:37 compute-0 podman[244391]: 2026-01-27 19:18:37.318345515 +0000 UTC m=+0.078583819 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute)
Jan 27 19:18:37 compute-0 nova_compute[185480]: 2026-01-27 19:18:37.366 185484 DEBUG nova.network.neutron [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:18:37 compute-0 nova_compute[185480]: 2026-01-27 19:18:37.898 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:38 compute-0 nova_compute[185480]: 2026-01-27 19:18:38.057 185484 DEBUG nova.network.neutron [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:18:38 compute-0 nova_compute[185480]: 2026-01-27 19:18:38.078 185484 DEBUG oslo_concurrency.lockutils [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Releasing lock "refresh_cache-1deabd7a-2569-4693-8eb6-b1c19e772784" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:18:38 compute-0 nova_compute[185480]: 2026-01-27 19:18:38.080 185484 DEBUG nova.compute.manager [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 19:18:38 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 27 19:18:38 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 20.100s CPU time.
Jan 27 19:18:38 compute-0 systemd-machined[156762]: Machine qemu-5-instance-00000005 terminated.
Jan 27 19:18:38 compute-0 nova_compute[185480]: 2026-01-27 19:18:38.378 185484 INFO nova.virt.libvirt.driver [-] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Instance destroyed successfully.
Jan 27 19:18:38 compute-0 nova_compute[185480]: 2026-01-27 19:18:38.379 185484 DEBUG nova.objects.instance [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'resources' on Instance uuid 1deabd7a-2569-4693-8eb6-b1c19e772784 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:18:38 compute-0 nova_compute[185480]: 2026-01-27 19:18:38.391 185484 INFO nova.virt.libvirt.driver [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Deleting instance files /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784_del
Jan 27 19:18:38 compute-0 nova_compute[185480]: 2026-01-27 19:18:38.392 185484 INFO nova.virt.libvirt.driver [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Deletion of /var/lib/nova/instances/1deabd7a-2569-4693-8eb6-b1c19e772784_del complete
Jan 27 19:18:38 compute-0 nova_compute[185480]: 2026-01-27 19:18:38.458 185484 INFO nova.compute.manager [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Took 0.38 seconds to destroy the instance on the hypervisor.
Jan 27 19:18:38 compute-0 nova_compute[185480]: 2026-01-27 19:18:38.459 185484 DEBUG oslo.service.loopingcall [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 19:18:38 compute-0 nova_compute[185480]: 2026-01-27 19:18:38.460 185484 DEBUG nova.compute.manager [-] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 19:18:38 compute-0 nova_compute[185480]: 2026-01-27 19:18:38.460 185484 DEBUG nova.network.neutron [-] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 19:18:38 compute-0 nova_compute[185480]: 2026-01-27 19:18:38.672 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:39 compute-0 nova_compute[185480]: 2026-01-27 19:18:39.127 185484 DEBUG nova.network.neutron [-] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:18:39 compute-0 nova_compute[185480]: 2026-01-27 19:18:39.139 185484 DEBUG nova.network.neutron [-] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:18:39 compute-0 nova_compute[185480]: 2026-01-27 19:18:39.160 185484 INFO nova.compute.manager [-] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Took 0.70 seconds to deallocate network for instance.
Jan 27 19:18:39 compute-0 nova_compute[185480]: 2026-01-27 19:18:39.192 185484 DEBUG oslo_concurrency.lockutils [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:39 compute-0 nova_compute[185480]: 2026-01-27 19:18:39.192 185484 DEBUG oslo_concurrency.lockutils [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:39 compute-0 nova_compute[185480]: 2026-01-27 19:18:39.312 185484 DEBUG nova.compute.provider_tree [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:18:39 compute-0 nova_compute[185480]: 2026-01-27 19:18:39.354 185484 DEBUG nova.scheduler.client.report [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:18:39 compute-0 nova_compute[185480]: 2026-01-27 19:18:39.391 185484 DEBUG oslo_concurrency.lockutils [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:18:39 compute-0 nova_compute[185480]: 2026-01-27 19:18:39.432 185484 INFO nova.scheduler.client.report [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Deleted allocations for instance 1deabd7a-2569-4693-8eb6-b1c19e772784
Jan 27 19:18:39 compute-0 nova_compute[185480]: 2026-01-27 19:18:39.496 185484 DEBUG oslo_concurrency.lockutils [None req-5a196884-8b3a-4ddf-b33e-2da3a7bc6463 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "1deabd7a-2569-4693-8eb6-b1c19e772784" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:18:40 compute-0 podman[244424]: 2026-01-27 19:18:40.302786593 +0000 UTC m=+0.059667382 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:18:40 compute-0 podman[244422]: 2026-01-27 19:18:40.331489496 +0000 UTC m=+0.094854721 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:18:40 compute-0 podman[244423]: 2026-01-27 19:18:40.35277774 +0000 UTC m=+0.115807157 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 19:18:42 compute-0 nova_compute[185480]: 2026-01-27 19:18:42.899 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:43 compute-0 nova_compute[185480]: 2026-01-27 19:18:43.674 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:44 compute-0 nova_compute[185480]: 2026-01-27 19:18:44.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:18:44 compute-0 nova_compute[185480]: 2026-01-27 19:18:44.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:18:45 compute-0 nova_compute[185480]: 2026-01-27 19:18:45.122 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:18:45 compute-0 nova_compute[185480]: 2026-01-27 19:18:45.123 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:18:45 compute-0 nova_compute[185480]: 2026-01-27 19:18:45.123 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:18:46 compute-0 nova_compute[185480]: 2026-01-27 19:18:46.500 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Updating instance_info_cache with network_info: [{"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:18:46 compute-0 nova_compute[185480]: 2026-01-27 19:18:46.531 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:18:46 compute-0 nova_compute[185480]: 2026-01-27 19:18:46.532 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:18:47 compute-0 nova_compute[185480]: 2026-01-27 19:18:47.902 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:48 compute-0 nova_compute[185480]: 2026-01-27 19:18:48.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:18:48 compute-0 nova_compute[185480]: 2026-01-27 19:18:48.676 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:50 compute-0 sshd-session[244488]: Invalid user sol from 45.148.10.240 port 59176
Jan 27 19:18:50 compute-0 sshd-session[244488]: Connection closed by invalid user sol 45.148.10.240 port 59176 [preauth]
Jan 27 19:18:52 compute-0 podman[244490]: 2026-01-27 19:18:52.303294374 +0000 UTC m=+0.072969683 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 19:18:52 compute-0 podman[244491]: 2026-01-27 19:18:52.333276868 +0000 UTC m=+0.102402944 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.539 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.540 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.540 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.540 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.643 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.746 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.747 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.820 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.821 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.880 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.881 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.904 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.940 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:52 compute-0 nova_compute[185480]: 2026-01-27 19:18:52.947 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.008 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.009 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.071 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.073 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.155 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.157 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.217 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.373 185484 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769541518.3714974, 1deabd7a-2569-4693-8eb6-b1c19e772784 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.373 185484 INFO nova.compute.manager [-] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] VM Stopped (Lifecycle Event)
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.394 185484 DEBUG nova.compute.manager [None req-a7937c77-79eb-452c-83cb-9c046d7090f3 - - - - - -] [instance: 1deabd7a-2569-4693-8eb6-b1c19e772784] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.557 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.558 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4891MB free_disk=72.37246322631836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.559 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.559 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.632 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.632 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.633 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.633 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.679 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.697 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.711 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.988 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:18:53 compute-0 nova_compute[185480]: 2026-01-27 19:18:53.990 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:18:54 compute-0 nova_compute[185480]: 2026-01-27 19:18:54.991 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:18:54 compute-0 nova_compute[185480]: 2026-01-27 19:18:54.992 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:18:55 compute-0 podman[244554]: 2026-01-27 19:18:55.330034233 +0000 UTC m=+0.094495262 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 19:18:55 compute-0 nova_compute[185480]: 2026-01-27 19:18:55.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:18:56 compute-0 nova_compute[185480]: 2026-01-27 19:18:56.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:18:57 compute-0 nova_compute[185480]: 2026-01-27 19:18:57.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:18:57 compute-0 nova_compute[185480]: 2026-01-27 19:18:57.906 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:57 compute-0 sshd-session[243897]: Received disconnect from 38.102.83.144 port 48444:11: disconnected by user
Jan 27 19:18:57 compute-0 sshd-session[243897]: Disconnected from user zuul 38.102.83.144 port 48444
Jan 27 19:18:57 compute-0 sshd-session[243894]: pam_unix(sshd:session): session closed for user zuul
Jan 27 19:18:57 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Jan 27 19:18:57 compute-0 systemd[1]: session-29.scope: Consumed 1.209s CPU time.
Jan 27 19:18:57 compute-0 systemd-logind[795]: Session 29 logged out. Waiting for processes to exit.
Jan 27 19:18:57 compute-0 systemd-logind[795]: Removed session 29.
Jan 27 19:18:58 compute-0 nova_compute[185480]: 2026-01-27 19:18:58.683 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:18:59 compute-0 nova_compute[185480]: 2026-01-27 19:18:59.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:18:59 compute-0 podman[201378]: time="2026-01-27T19:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:18:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:18:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 27 19:19:01 compute-0 podman[244576]: 2026-01-27 19:19:01.330523239 +0000 UTC m=+0.104890284 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, config_id=openstack_network_exporter)
Jan 27 19:19:01 compute-0 openstack_network_exporter[204477]: ERROR   19:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:19:01 compute-0 openstack_network_exporter[204477]: ERROR   19:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:19:02 compute-0 nova_compute[185480]: 2026-01-27 19:19:02.911 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:03 compute-0 nova_compute[185480]: 2026-01-27 19:19:03.687 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:07 compute-0 nova_compute[185480]: 2026-01-27 19:19:07.914 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:08 compute-0 podman[244596]: 2026-01-27 19:19:08.309641556 +0000 UTC m=+0.082054943 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 19:19:08 compute-0 nova_compute[185480]: 2026-01-27 19:19:08.689 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:11 compute-0 podman[244617]: 2026-01-27 19:19:11.334653473 +0000 UTC m=+0.088053628 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:19:11 compute-0 podman[244615]: 2026-01-27 19:19:11.342951683 +0000 UTC m=+0.111408850 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 19:19:11 compute-0 podman[244616]: 2026-01-27 19:19:11.369967935 +0000 UTC m=+0.125780807 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 19:19:12 compute-0 nova_compute[185480]: 2026-01-27 19:19:12.917 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:13 compute-0 nova_compute[185480]: 2026-01-27 19:19:13.691 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:16 compute-0 sshd-session[244680]: Accepted publickey for zuul from 38.102.83.144 port 47836 ssh2: RSA SHA256:jhFpR9mdpMGvU2F0q/HJAkqGxozs6TWh9oCwMxPPlpE
Jan 27 19:19:16 compute-0 systemd-logind[795]: New session 30 of user zuul.
Jan 27 19:19:16 compute-0 systemd[1]: Started Session 30 of User zuul.
Jan 27 19:19:16 compute-0 sshd-session[244680]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 19:19:17 compute-0 sudo[244857]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcojakrkcupjlsqecqftnatrqoitoxwu ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769541557.004329-59061-134852774148757/AnsiballZ_command.py'
Jan 27 19:19:17 compute-0 sudo[244857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:19:17 compute-0 python3[244859]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:19:17 compute-0 sudo[244857]: pam_unix(sudo:session): session closed for user root
Jan 27 19:19:17 compute-0 nova_compute[185480]: 2026-01-27 19:19:17.919 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:18 compute-0 nova_compute[185480]: 2026-01-27 19:19:18.694 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:19:20.528 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:19:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:19:20.529 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:19:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:19:20.530 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:19:22 compute-0 nova_compute[185480]: 2026-01-27 19:19:22.921 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:23 compute-0 podman[244900]: 2026-01-27 19:19:23.315496391 +0000 UTC m=+0.089080122 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 19:19:23 compute-0 podman[244899]: 2026-01-27 19:19:23.326379154 +0000 UTC m=+0.103092720 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:19:23 compute-0 nova_compute[185480]: 2026-01-27 19:19:23.697 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:26 compute-0 sudo[245130]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgfbpkrasisdvhrxyvatkpyxwqrguwjp ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769541565.4468117-59228-198676460999854/AnsiballZ_command.py'
Jan 27 19:19:26 compute-0 sudo[245130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:19:26 compute-0 podman[245091]: 2026-01-27 19:19:26.234903906 +0000 UTC m=+0.156629673 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 27 19:19:26 compute-0 python3[245136]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:19:26 compute-0 sudo[245130]: pam_unix(sudo:session): session closed for user root
Jan 27 19:19:27 compute-0 nova_compute[185480]: 2026-01-27 19:19:27.924 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:28 compute-0 nova_compute[185480]: 2026-01-27 19:19:28.699 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:29 compute-0 podman[201378]: time="2026-01-27T19:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:19:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:19:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Jan 27 19:19:31 compute-0 openstack_network_exporter[204477]: ERROR   19:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:19:31 compute-0 openstack_network_exporter[204477]: ERROR   19:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:19:32 compute-0 podman[245176]: 2026-01-27 19:19:32.354061079 +0000 UTC m=+0.120564883 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 19:19:32 compute-0 nova_compute[185480]: 2026-01-27 19:19:32.926 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:33 compute-0 nova_compute[185480]: 2026-01-27 19:19:33.702 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:36 compute-0 sudo[245369]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omucpmvofqxodmlqqzwwwyaxzdzujazm ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769541575.6967216-59386-189667323192278/AnsiballZ_command.py'
Jan 27 19:19:36 compute-0 sudo[245369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:19:36 compute-0 python3[245371]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:19:36 compute-0 sudo[245369]: pam_unix(sudo:session): session closed for user root
Jan 27 19:19:37 compute-0 nova_compute[185480]: 2026-01-27 19:19:37.928 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:38 compute-0 nova_compute[185480]: 2026-01-27 19:19:38.705 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:39 compute-0 podman[245410]: 2026-01-27 19:19:39.352374357 +0000 UTC m=+0.116593076 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 27 19:19:42 compute-0 podman[245429]: 2026-01-27 19:19:42.334632112 +0000 UTC m=+0.116787321 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:19:42 compute-0 podman[245431]: 2026-01-27 19:19:42.344769126 +0000 UTC m=+0.111218065 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:19:42 compute-0 podman[245430]: 2026-01-27 19:19:42.370003096 +0000 UTC m=+0.151355546 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, container_name=ovn_controller)
Jan 27 19:19:42 compute-0 nova_compute[185480]: 2026-01-27 19:19:42.933 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:43 compute-0 nova_compute[185480]: 2026-01-27 19:19:43.708 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:46 compute-0 nova_compute[185480]: 2026-01-27 19:19:46.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:19:46 compute-0 nova_compute[185480]: 2026-01-27 19:19:46.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:19:46 compute-0 nova_compute[185480]: 2026-01-27 19:19:46.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:19:47 compute-0 nova_compute[185480]: 2026-01-27 19:19:47.233 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:19:47 compute-0 nova_compute[185480]: 2026-01-27 19:19:47.234 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:19:47 compute-0 nova_compute[185480]: 2026-01-27 19:19:47.234 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:19:47 compute-0 nova_compute[185480]: 2026-01-27 19:19:47.235 185484 DEBUG nova.objects.instance [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6b280bb-d859-43f3-836a-f93d00510948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:19:47 compute-0 nova_compute[185480]: 2026-01-27 19:19:47.937 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:48 compute-0 nova_compute[185480]: 2026-01-27 19:19:48.711 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:48 compute-0 nova_compute[185480]: 2026-01-27 19:19:48.747 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updating instance_info_cache with network_info: [{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:19:48 compute-0 nova_compute[185480]: 2026-01-27 19:19:48.770 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:19:48 compute-0 nova_compute[185480]: 2026-01-27 19:19:48.771 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:19:49 compute-0 nova_compute[185480]: 2026-01-27 19:19:49.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.545 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.545 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.546 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.546 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:19:52 compute-0 sudo[245669]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebednxxymkpkuprpgqcaropwcyfxblyh ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769541591.9088335-59611-3275994592895/AnsiballZ_command.py'
Jan 27 19:19:52 compute-0 sudo[245669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.668 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.742 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.743 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:19:52 compute-0 python3[245671]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.844 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.846 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:19:52 compute-0 sudo[245669]: pam_unix(sudo:session): session closed for user root
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.924 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.926 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.948 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:52 compute-0 nova_compute[185480]: 2026-01-27 19:19:52.997 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.005 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.069 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.071 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.144 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.145 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.204 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.205 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.263 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
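Annotation (not part of the journal): the disk audit above runs qemu-img info for each instance disk under oslo_concurrency's prlimit wrapper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a hung or runaway qemu-img cannot stall the compute agent. A rough sketch of the same call made directly through oslo_concurrency, with the path and limits taken from the logged command:

    # Illustrative sketch mirroring the logged command; not nova's actual code path.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,   # matches --as=1073741824 in the log
        cpu_time=30,                # matches --cpu=30
    )
    out, err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk",
        "--force-share", "--output=json",
        prlimit=limits,
    )
    print(out)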
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.592 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.593 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4881MB free_disk=72.37203979492188GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.594 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.594 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.671 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.672 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.672 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.673 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.714 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.772 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.790 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
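Annotation (not part of the journal): the inventory dict above is what the resource tracker reports to placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio, which is why 8 physical vCPUs advertise as 32 schedulable VCPU at the 4.0 ratio shown. A quick check against the logged values:

    # Sketch reproducing placement's usable-capacity arithmetic from the logged inventory.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2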
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.792 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:19:53 compute-0 nova_compute[185480]: 2026-01-27 19:19:53.793 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:19:54 compute-0 podman[245734]: 2026-01-27 19:19:54.345257181 +0000 UTC m=+0.101767458 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 19:19:54 compute-0 podman[245735]: 2026-01-27 19:19:54.371581767 +0000 UTC m=+0.124759313 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 19:19:54 compute-0 nova_compute[185480]: 2026-01-27 19:19:54.787 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:19:54 compute-0 nova_compute[185480]: 2026-01-27 19:19:54.788 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:19:54 compute-0 nova_compute[185480]: 2026-01-27 19:19:54.788 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
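Annotation (not part of the journal): the recurring "Running periodic task ComputeManager.*" lines come from oslo_service's periodic task machinery; decorated manager methods run on their configured spacing, and _reclaim_queued_deletes returns early here because reclaim_instance_interval is not positive. A minimal sketch of that pattern, with hypothetical class and attribute names rather than nova's actual code:

    # Illustrative sketch of the oslo_service periodic-task pattern seen above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)
            self.reclaim_interval = 0  # stand-in for CONF.reclaim_instance_interval

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            print("refreshing one instance's network info cache")

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if self.reclaim_interval <= 0:
                print("reclaim interval <= 0, skipping...")  # same skip as in the log
                return

In a running service these methods are driven by run_periodic_tasks() on a timer loop, which is what produces the DEBUG lines above.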
Jan 27 19:19:56 compute-0 nova_compute[185480]: 2026-01-27 19:19:56.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:19:57 compute-0 podman[245776]: 2026-01-27 19:19:57.330834128 +0000 UTC m=+0.099658467 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., version=9.4, name=ubi9, config_id=kepler, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0)
Jan 27 19:19:57 compute-0 nova_compute[185480]: 2026-01-27 19:19:57.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:19:57 compute-0 nova_compute[185480]: 2026-01-27 19:19:57.942 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:58 compute-0 nova_compute[185480]: 2026-01-27 19:19:58.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:19:58 compute-0 nova_compute[185480]: 2026-01-27 19:19:58.716 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:19:59 compute-0 podman[201378]: time="2026-01-27T19:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:19:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:19:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4368 "" "Go-http-client/1.1"
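Annotation (not part of the journal): the GET lines above are the prometheus-podman-exporter polling the podman API service over its unix socket (the exporter's config_data mounts /run/podman/podman.sock). Roughly the same query can be made by hand; the sketch below shells out to curl, and the "d" host in the URL is just a placeholder curl requires when talking to a unix socket:

    # Sketch: list containers through the podman API socket, like the exporter does.
    import json
    import subprocess

    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(len(json.loads(out)), "containers")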
Jan 27 19:20:01 compute-0 openstack_network_exporter[204477]: ERROR   19:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:20:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:20:01 compute-0 openstack_network_exporter[204477]: ERROR   19:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:20:01 compute-0 openstack_network_exporter[204477]: 
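Annotation (not part of the journal): the two ERRORs above come from the exporter invoking ovs-appctl's dpif-netdev commands (pmd-rxq-show, pmd-perf-show). Those commands only apply to the userspace (netdev/DPDK) datapath; this host appears to use the kernel datapath (the port binding earlier reports "datapath_type": "system"), so ovs-vswitchd answers "please specify an existing datapath" and the exporter logs it each scrape. A hand-run equivalent, sketched with subprocess:

    # Sketch: reproduce the exporter's failing appctl calls by hand.
    import subprocess

    for cmd in ("dpif-netdev/pmd-rxq-show", "dpif-netdev/pmd-perf-show"):
        proc = subprocess.run(["ovs-appctl", cmd], capture_output=True, text=True)
        # On a kernel-datapath host these return non-zero with
        # "please specify an existing datapath", matching the log above.
        print(cmd, proc.returncode, proc.stderr.strip() or proc.stdout.strip())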
Jan 27 19:20:01 compute-0 nova_compute[185480]: 2026-01-27 19:20:01.511 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:20:01 compute-0 nova_compute[185480]: 2026-01-27 19:20:01.534 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:20:02 compute-0 nova_compute[185480]: 2026-01-27 19:20:02.944 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:03 compute-0 podman[245796]: 2026-01-27 19:20:03.312860122 +0000 UTC m=+0.078835985 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 27 19:20:03 compute-0 nova_compute[185480]: 2026-01-27 19:20:03.720 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:07 compute-0 nova_compute[185480]: 2026-01-27 19:20:07.947 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:08 compute-0 nova_compute[185480]: 2026-01-27 19:20:08.723 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:10 compute-0 podman[245817]: 2026-01-27 19:20:10.311357106 +0000 UTC m=+0.087931224 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 19:20:12 compute-0 nova_compute[185480]: 2026-01-27 19:20:12.951 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:13 compute-0 podman[245836]: 2026-01-27 19:20:13.342415857 +0000 UTC m=+0.119521746 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:20:13 compute-0 podman[245838]: 2026-01-27 19:20:13.349936218 +0000 UTC m=+0.105508188 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 19:20:13 compute-0 podman[245837]: 2026-01-27 19:20:13.396517964 +0000 UTC m=+0.155248890 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 27 19:20:13 compute-0 nova_compute[185480]: 2026-01-27 19:20:13.726 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:17 compute-0 nova_compute[185480]: 2026-01-27 19:20:17.953 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:18 compute-0 nova_compute[185480]: 2026-01-27 19:20:18.729 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:20:20.529 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:20:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:20:20.530 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:20:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:20:20.531 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
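Annotation (not part of the journal): the Acquiring/Acquired/Released triple above is the standard oslo_concurrency lock pattern; neutron wraps ProcessMonitor._check_child_processes in a named lock so only one monitor pass runs at a time, and oslo logs each transition at DEBUG. A minimal sketch of the pattern, with a placeholder body:

    # Illustrative sketch of the lock pattern logged above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Everything here runs with the named lock held; oslo emits the
        # Acquiring/Acquired/Released DEBUG lines seen in the journal.
        pass

    check_child_processes()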
Jan 27 19:20:21 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 19:20:22 compute-0 nova_compute[185480]: 2026-01-27 19:20:22.956 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:23 compute-0 nova_compute[185480]: 2026-01-27 19:20:23.733 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:25 compute-0 podman[245906]: 2026-01-27 19:20:25.331608042 +0000 UTC m=+0.110112070 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 19:20:25 compute-0 podman[245907]: 2026-01-27 19:20:25.331998441 +0000 UTC m=+0.098900418 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 19:20:27 compute-0 nova_compute[185480]: 2026-01-27 19:20:27.958 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:28 compute-0 podman[245946]: 2026-01-27 19:20:28.386202074 +0000 UTC m=+0.155516077 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.29.0, config_id=kepler, distribution-scope=public, version=9.4, release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 27 19:20:28 compute-0 nova_compute[185480]: 2026-01-27 19:20:28.736 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:29 compute-0 podman[201378]: time="2026-01-27T19:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:20:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:20:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Jan 27 19:20:31 compute-0 openstack_network_exporter[204477]: ERROR   19:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:20:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:20:31 compute-0 openstack_network_exporter[204477]: ERROR   19:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:20:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.100 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.101 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75dae26f30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
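Editor's note: the long run of "Registering pollster ... to be executed via executor" lines above shows the manager handing every stevedore extension to a single-thread ThreadPoolExecutor, matching the earlier "Processing pollsters for [pollsters] with [1] threads" message. A minimal sketch of that submit-and-collect pattern (illustrative only, not ceilometer's actual code):

    import concurrent.futures

    def poll(name):
        # Stand-in for a pollster's sample-gathering call.
        return f'{name}: polled'

    pollsters = ['disk.device.allocation', 'memory.usage', 'power.state']
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
        futures = {executor.submit(poll, p): p for p in pollsters}
        for fut in concurrent.futures.as_completed(futures):
            print(fut.result())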
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.112 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b6b280bb-d859-43f3-836a-f93d00510948', 'name': 'test_0', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.118 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '92ff85a4-5620-4dd0-8930-62b7f561edf6', 'name': 'vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf', 'flavor': {'id': 'bc7c8c58-0a2b-4396-9f89-7ff8e35afa36', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '525193b7-cb5a-4d63-9747-3b917622bbe3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'user_id': '6d30d46dc88a4403b3a241949384d8f7', 'hostId': 'd5bc440086b1eb77fd98787be9d6be1e7b3485cc2cb3004bec06f9f0', 'status': 'active', 'metadata': {'metering.server_group': 'bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
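Editor's note: the two "instance data" records above come from local discovery against libvirt, enriched with the Nova metadata namespace (the same URI configured for kepler via LIBVIRT_METADATA_URI). A sketch of listing running domains and pulling that metadata, assuming libvirt-python and read access to qemu:///system:

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'

    conn = libvirt.openReadOnly('qemu:///system')
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        print(dom.UUIDString(), dom.name())
        try:
            # Nova-specific <instance> element: display name, flavor, owner, ...
            print(dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS, 0))
        except libvirt.libvirtError:
            pass  # not a Nova-managed guest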
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.118 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.119 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.119 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T19:20:32.119472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.160 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 21766144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.160 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.160 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.198 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.198 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.198 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.199 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.200 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.201 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T19:20:32.200119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.224 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.260 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/memory.usage volume: 48.98828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.261 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
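Editor's note: the memory.usage samples above (~48.9 MB for both guests) are derived from libvirt balloon statistics. A sketch of reading them with libvirt-python; the available-minus-unused derivation is one plausible reading of the numbers, not necessarily ceilometer's exact formula:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('b6b280bb-d859-43f3-836a-f93d00510948')
    stats = dom.memoryStats()          # all values are reported in KiB
    if 'available' in stats and 'unused' in stats:
        usage_mb = (stats['available'] - stats['unused']) / 1024.0
    else:
        usage_mb = stats.get('rss', 0) / 1024.0   # fall back to host-side RSS
    print(round(usage_mb, 4), 'MB')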
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.261 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.261 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.262 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.262 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.262 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T19:20:32.262040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.263 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.263 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
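Editor's note: the power.state volume of 1 for both instances corresponds to libvirt's VIR_DOMAIN_RUNNING state code. A minimal check against one of the same domains (assuming libvirt-python):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('b6b280bb-d859-43f3-836a-f93d00510948')
    state, reason = dom.state()
    print(state, state == libvirt.VIR_DOMAIN_RUNNING)   # 1, True for a running guest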
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.264 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.264 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.264 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.264 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T19:20:32.264612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.271 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.276 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.277 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.278 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.278 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.278 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.278 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T19:20:32.278553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.381 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.382 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.382 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.491 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.491 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.492 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.493 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.494 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.494 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.494 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.495 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.496 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.496 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T19:20:32.494046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.496 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.497 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.drop volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T19:20:32.496740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.497 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.498 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.499 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.499 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.500 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.500 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.500 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T19:20:32.499282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.501 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.501 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
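Editor's note: the disk.device.capacity values of 1073741824 are the 1 GiB root and ephemeral disks of the m1.small flavor, plus a much smaller third device; per-device capacity, allocation, and physical size come from libvirt's block info call. A sketch (the device name 'vda' is an assumption; the real targets come from the domain XML):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('b6b280bb-d859-43f3-836a-f93d00510948')
    capacity, allocation, physical = dom.blockInfo('vda')
    print(capacity, allocation, physical)   # bytes: virtual size, allocated, on-disk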
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.502 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.502 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.502 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/cpu volume: 43350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.502 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/cpu volume: 38360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
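Editor's note: the cpu samples above (43350000000 and 38360000000) are cumulative guest CPU time in nanoseconds as exposed by libvirt. A sketch of reading it, assuming libvirt-python:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('b6b280bb-d859-43f3-836a-f93d00510948')
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(vcpus, cpu_time_ns)   # m1.small has 1 vCPU; cpu_time_ns grows monotonically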
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.503 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T19:20:32.502396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.504 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.504 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T19:20:32.504034) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.504 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.504 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.505 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.505 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.505 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
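Editor's note: the per-device read requests and read bytes polled above map onto libvirt's block I/O counters. A sketch (again assuming libvirt-python and a device name of 'vda'):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('b6b280bb-d859-43f3-836a-f93d00510948')
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats('vda')
    print(rd_req, rd_bytes)   # compare with the disk.device.read.* samples above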
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.506 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.507 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.507 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.507 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T19:20:32.506946) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.508 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.508 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.508 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 713872381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.509 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 102610265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.509 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.read.latency volume: 79720785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.509 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 724384888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T19:20:32.508785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.510 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 110122219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.510 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.read.latency volume: 191687644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.511 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.512 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.512 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.512 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.513 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T19:20:32.511947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.514 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.514 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T19:20:32.514230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.514 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.515 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.515 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.515 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.515 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.516 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T19:20:32.515731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.516 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.516 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.516 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.517 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.517 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.517 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.517 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.518 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.518 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.518 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.518 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.518 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.bytes volume: 2468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.518 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.519 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T19:20:32.518181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.519 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.519 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.519 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 2565149834 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.519 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T19:20:32.519617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.520 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 13830550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.520 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.520 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 2519550782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.520 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 10920156 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.521 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.521 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.521 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.521 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.522 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.522 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T19:20:32.522114) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.522 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.523 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.523 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.523 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.523 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.523 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.523 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T19:20:32.523531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.524 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.524 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.524 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.524 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.525 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.525 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.526 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.526 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.527 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.527 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T19:20:32.526014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.527 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.527 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T19:20:32.527403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.527 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.528 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.528 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.529 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.529 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T19:20:32.528804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.530 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.530 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T19:20:32.530007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.530 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.530 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.531 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.531 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.531 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.532 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.532 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.532 14 DEBUG ceilometer.compute.pollsters [-] b6b280bb-d859-43f3-836a-f93d00510948/network.incoming.bytes volume: 2256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.532 14 DEBUG ceilometer.compute.pollsters [-] 92ff85a4-5620-4dd0-8930-62b7f561edf6/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T19:20:32.532424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:20:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:20:32.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
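The block above is one complete ceilometer compute polling cycle: for each meter the agent runs discovery, checks whether the pollster needs coordination, records a heartbeat, emits one _stats_to_sample DEBUG line per instance (and per device, where applicable) carrying the cumulative counter value, and then logs "Finished polling". A minimal sketch for condensing those DEBUG lines from a saved journal extract follows; the file name and regular expression describe this log layout and are not ceilometer interfaces.

# Sketch (not part of the agent): condense the "_stats_to_sample" DEBUG lines
# into the latest cumulative value seen per instance and meter. When a meter
# reports several devices, the last value in the file wins.
import re

SAMPLE_RE = re.compile(
    r"ceilometer\.compute\.pollsters \[-\] "
    r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>\d+)"
)

def latest_volumes(path="compute-0-journal.log"):
    latest = {}  # (instance uuid, meter name) -> last cumulative volume
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = SAMPLE_RE.search(line)
            if m:
                latest[(m["instance"], m["meter"])] = int(m["volume"])
    return latest

if __name__ == "__main__":
    for (instance, meter), volume in sorted(latest_volumes().items()):
        print(f"{instance}  {meter:35s} {volume:>12d}")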
Jan 27 19:20:32 compute-0 nova_compute[185480]: 2026-01-27 19:20:32.960 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:33 compute-0 nova_compute[185480]: 2026-01-27 19:20:33.739 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:34 compute-0 podman[245966]: 2026-01-27 19:20:34.333051041 +0000 UTC m=+0.107386627 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Jan 27 19:20:37 compute-0 nova_compute[185480]: 2026-01-27 19:20:37.963 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:38 compute-0 nova_compute[185480]: 2026-01-27 19:20:38.741 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:41 compute-0 podman[245988]: 2026-01-27 19:20:41.36555938 +0000 UTC m=+0.126715053 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 19:20:42 compute-0 nova_compute[185480]: 2026-01-27 19:20:42.967 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:43 compute-0 nova_compute[185480]: 2026-01-27 19:20:43.744 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:44 compute-0 podman[246008]: 2026-01-27 19:20:44.341046992 +0000 UTC m=+0.090810098 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 19:20:44 compute-0 podman[246006]: 2026-01-27 19:20:44.355320076 +0000 UTC m=+0.119471278 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:20:44 compute-0 podman[246007]: 2026-01-27 19:20:44.388461704 +0000 UTC m=+0.145237788 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
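Each podman health_status record above (openstack_network_exporter, ceilometer_agent_compute, ovn_metadata_agent, podman_exporter, ovn_controller) embeds the container's edpm_ansible config_data as a Python-style dict literal, so the image, bind mounts, and EDPM_CONFIG_HASH can be recovered straight from the journal. A rough sketch, assuming the brace-balanced literal always follows "config_data=" in these lines:

# Sketch: pull the config_data dict out of a podman health_status journal line.
# Brace counting over the literal is an assumption about how these lines are
# laid out; it is not a podman or edpm_ansible interface.
import ast

def extract_config_data(line: str) -> dict:
    start = line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(line[start:i + 1])
    raise ValueError("unbalanced config_data literal")

# Example: list the image and bind mounts of one of the records above.
# cfg = extract_config_data(journal_line)
# print(cfg["image"], *cfg["volumes"], sep="\n  ")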
Jan 27 19:20:45 compute-0 nova_compute[185480]: 2026-01-27 19:20:45.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:20:45 compute-0 nova_compute[185480]: 2026-01-27 19:20:45.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 19:20:45 compute-0 nova_compute[185480]: 2026-01-27 19:20:45.530 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 19:20:46 compute-0 nova_compute[185480]: 2026-01-27 19:20:46.530 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:20:46 compute-0 nova_compute[185480]: 2026-01-27 19:20:46.531 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:20:47 compute-0 nova_compute[185480]: 2026-01-27 19:20:47.279 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:20:47 compute-0 nova_compute[185480]: 2026-01-27 19:20:47.280 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:20:47 compute-0 nova_compute[185480]: 2026-01-27 19:20:47.281 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:20:47 compute-0 nova_compute[185480]: 2026-01-27 19:20:47.969 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:48 compute-0 sshd-session[246073]: Received disconnect from 45.148.10.151 port 25452:11:  [preauth]
Jan 27 19:20:48 compute-0 sshd-session[246073]: Disconnected from authenticating user root 45.148.10.151 port 25452 [preauth]
Jan 27 19:20:48 compute-0 nova_compute[185480]: 2026-01-27 19:20:48.700 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Updating instance_info_cache with network_info: [{"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:20:48 compute-0 nova_compute[185480]: 2026-01-27 19:20:48.724 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:20:48 compute-0 nova_compute[185480]: 2026-01-27 19:20:48.725 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:20:48 compute-0 nova_compute[185480]: 2026-01-27 19:20:48.747 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:51 compute-0 nova_compute[185480]: 2026-01-27 19:20:51.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:20:52 compute-0 nova_compute[185480]: 2026-01-27 19:20:52.973 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:52 compute-0 sshd-session[244683]: Received disconnect from 38.102.83.144 port 47836:11: disconnected by user
Jan 27 19:20:52 compute-0 sshd-session[244683]: Disconnected from user zuul 38.102.83.144 port 47836
Jan 27 19:20:52 compute-0 sshd-session[244680]: pam_unix(sshd:session): session closed for user zuul
Jan 27 19:20:52 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Jan 27 19:20:52 compute-0 systemd[1]: session-30.scope: Consumed 4.576s CPU time.
Jan 27 19:20:52 compute-0 systemd-logind[795]: Session 30 logged out. Waiting for processes to exit.
Jan 27 19:20:52 compute-0 systemd-logind[795]: Removed session 30.
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.549 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.550 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.550 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.550 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.636 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.735 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.736 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.753 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.795 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.796 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.858 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.860 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.923 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.933 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.994 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:20:53 compute-0 nova_compute[185480]: 2026-01-27 19:20:53.996 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.057 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.059 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.132 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.134 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.211 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.613 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.615 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4884MB free_disk=72.3724365234375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.615 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.615 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.816 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.817 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.817 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.817 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.921 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing inventories for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.993 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating ProviderTree inventory for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 19:20:54 compute-0 nova_compute[185480]: 2026-01-27 19:20:54.994 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating inventory in ProviderTree for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 19:20:55 compute-0 nova_compute[185480]: 2026-01-27 19:20:55.026 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing aggregate associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 19:20:55 compute-0 nova_compute[185480]: 2026-01-27 19:20:55.047 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing trait associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, traits: HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 19:20:55 compute-0 nova_compute[185480]: 2026-01-27 19:20:55.108 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:20:56 compute-0 podman[246100]: 2026-01-27 19:20:56.355387347 +0000 UTC m=+0.122293085 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 19:20:56 compute-0 podman[246101]: 2026-01-27 19:20:56.374792765 +0000 UTC m=+0.133378233 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 19:20:57 compute-0 nova_compute[185480]: 2026-01-27 19:20:57.109 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:20:57 compute-0 nova_compute[185480]: 2026-01-27 19:20:57.111 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:20:57 compute-0 nova_compute[185480]: 2026-01-27 19:20:57.111 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:20:57 compute-0 nova_compute[185480]: 2026-01-27 19:20:57.978 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:58 compute-0 nova_compute[185480]: 2026-01-27 19:20:58.757 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:20:59 compute-0 nova_compute[185480]: 2026-01-27 19:20:59.106 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:20:59 compute-0 nova_compute[185480]: 2026-01-27 19:20:59.107 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:20:59 compute-0 nova_compute[185480]: 2026-01-27 19:20:59.108 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:20:59 compute-0 nova_compute[185480]: 2026-01-27 19:20:59.109 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:20:59 compute-0 nova_compute[185480]: 2026-01-27 19:20:59.109 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:20:59 compute-0 podman[246143]: 2026-01-27 19:20:59.302476976 +0000 UTC m=+0.082191189 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, version=9.4, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 27 19:20:59 compute-0 nova_compute[185480]: 2026-01-27 19:20:59.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:20:59 compute-0 nova_compute[185480]: 2026-01-27 19:20:59.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:20:59 compute-0 podman[201378]: time="2026-01-27T19:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:20:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:20:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 27 19:21:00 compute-0 nova_compute[185480]: 2026-01-27 19:21:00.552 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:21:00 compute-0 nova_compute[185480]: 2026-01-27 19:21:00.554 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 19:21:01 compute-0 openstack_network_exporter[204477]: ERROR   19:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:21:01 compute-0 openstack_network_exporter[204477]: ERROR   19:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:21:01 compute-0 nova_compute[185480]: 2026-01-27 19:21:01.564 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:21:02 compute-0 nova_compute[185480]: 2026-01-27 19:21:02.982 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:03 compute-0 nova_compute[185480]: 2026-01-27 19:21:03.760 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:05 compute-0 podman[246164]: 2026-01-27 19:21:05.328262213 +0000 UTC m=+0.093330539 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, version=9.6, architecture=x86_64)
Jan 27 19:21:07 compute-0 nova_compute[185480]: 2026-01-27 19:21:07.984 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:08 compute-0 sshd-session[246184]: Invalid user sol from 45.148.10.240 port 54160
Jan 27 19:21:08 compute-0 nova_compute[185480]: 2026-01-27 19:21:08.763 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:08 compute-0 sshd-session[246184]: Connection closed by invalid user sol 45.148.10.240 port 54160 [preauth]
Jan 27 19:21:12 compute-0 podman[246186]: 2026-01-27 19:21:12.38643604 +0000 UTC m=+0.144461810 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 19:21:12 compute-0 nova_compute[185480]: 2026-01-27 19:21:12.986 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:13 compute-0 nova_compute[185480]: 2026-01-27 19:21:13.766 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:14 compute-0 podman[246208]: 2026-01-27 19:21:14.785584064 +0000 UTC m=+0.073336528 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 27 19:21:14 compute-0 podman[246206]: 2026-01-27 19:21:14.81620783 +0000 UTC m=+0.113819081 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 19:21:14 compute-0 podman[246207]: 2026-01-27 19:21:14.838964249 +0000 UTC m=+0.128831823 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 27 19:21:17 compute-0 nova_compute[185480]: 2026-01-27 19:21:17.989 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:18 compute-0 nova_compute[185480]: 2026-01-27 19:21:18.770 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:20.530 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:21:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:20.530 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:21:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:20.532 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:21:22 compute-0 nova_compute[185480]: 2026-01-27 19:21:22.992 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:23 compute-0 nova_compute[185480]: 2026-01-27 19:21:23.774 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:27 compute-0 podman[246273]: 2026-01-27 19:21:27.326879798 +0000 UTC m=+0.089800973 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 19:21:27 compute-0 podman[246272]: 2026-01-27 19:21:27.354313319 +0000 UTC m=+0.114888608 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:21:27 compute-0 nova_compute[185480]: 2026-01-27 19:21:27.995 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:28 compute-0 nova_compute[185480]: 2026-01-27 19:21:28.777 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:29 compute-0 podman[201378]: time="2026-01-27T19:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:21:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:21:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 27 19:21:30 compute-0 podman[246313]: 2026-01-27 19:21:30.359750883 +0000 UTC m=+0.125765230 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.tags=base rhel9, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, name=ubi9, vcs-type=git, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 19:21:31 compute-0 openstack_network_exporter[204477]: ERROR   19:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:21:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:21:31 compute-0 openstack_network_exporter[204477]: ERROR   19:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:21:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:21:32 compute-0 nova_compute[185480]: 2026-01-27 19:21:32.999 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:33 compute-0 nova_compute[185480]: 2026-01-27 19:21:33.780 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:36 compute-0 podman[246333]: 2026-01-27 19:21:36.344412388 +0000 UTC m=+0.118941565 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, release=1755695350, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 27 19:21:38 compute-0 nova_compute[185480]: 2026-01-27 19:21:38.001 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:38 compute-0 nova_compute[185480]: 2026-01-27 19:21:38.784 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:43 compute-0 nova_compute[185480]: 2026-01-27 19:21:43.004 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:43 compute-0 podman[246354]: 2026-01-27 19:21:43.300444345 +0000 UTC m=+0.080299915 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 19:21:43 compute-0 nova_compute[185480]: 2026-01-27 19:21:43.787 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:45 compute-0 podman[246372]: 2026-01-27 19:21:45.34876017 +0000 UTC m=+0.104338173 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 19:21:45 compute-0 podman[246374]: 2026-01-27 19:21:45.35125092 +0000 UTC m=+0.094123207 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 19:21:45 compute-0 podman[246373]: 2026-01-27 19:21:45.384280676 +0000 UTC m=+0.140564396 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 19:21:46 compute-0 nova_compute[185480]: 2026-01-27 19:21:46.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:21:46 compute-0 nova_compute[185480]: 2026-01-27 19:21:46.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:21:46 compute-0 nova_compute[185480]: 2026-01-27 19:21:46.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:21:46 compute-0 nova_compute[185480]: 2026-01-27 19:21:46.726 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:21:46 compute-0 nova_compute[185480]: 2026-01-27 19:21:46.727 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:21:46 compute-0 nova_compute[185480]: 2026-01-27 19:21:46.728 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:21:46 compute-0 nova_compute[185480]: 2026-01-27 19:21:46.729 185484 DEBUG nova.objects.instance [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6b280bb-d859-43f3-836a-f93d00510948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:21:48 compute-0 nova_compute[185480]: 2026-01-27 19:21:48.007 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:48 compute-0 nova_compute[185480]: 2026-01-27 19:21:48.611 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updating instance_info_cache with network_info: [{"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:21:48 compute-0 nova_compute[185480]: 2026-01-27 19:21:48.630 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-b6b280bb-d859-43f3-836a-f93d00510948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:21:48 compute-0 nova_compute[185480]: 2026-01-27 19:21:48.631 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:21:48 compute-0 nova_compute[185480]: 2026-01-27 19:21:48.791 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:51 compute-0 nova_compute[185480]: 2026-01-27 19:21:51.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:21:53 compute-0 nova_compute[185480]: 2026-01-27 19:21:53.010 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:53 compute-0 nova_compute[185480]: 2026-01-27 19:21:53.794 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:55 compute-0 nova_compute[185480]: 2026-01-27 19:21:55.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:21:55 compute-0 nova_compute[185480]: 2026-01-27 19:21:55.547 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:21:55 compute-0 nova_compute[185480]: 2026-01-27 19:21:55.548 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:21:55 compute-0 nova_compute[185480]: 2026-01-27 19:21:55.549 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:21:55 compute-0 nova_compute[185480]: 2026-01-27 19:21:55.549 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:21:55 compute-0 nova_compute[185480]: 2026-01-27 19:21:55.710 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:21:55 compute-0 nova_compute[185480]: 2026-01-27 19:21:55.805 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:21:55 compute-0 nova_compute[185480]: 2026-01-27 19:21:55.808 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:21:55 compute-0 nova_compute[185480]: 2026-01-27 19:21:55.893 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:21:55 compute-0 nova_compute[185480]: 2026-01-27 19:21:55.895 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:21:55 compute-0 nova_compute[185480]: 2026-01-27 19:21:55.982 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:21:55 compute-0 nova_compute[185480]: 2026-01-27 19:21:55.984 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.054 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.068 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.142 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.145 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.226 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.228 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.302 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.304 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.383 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.776 185484 DEBUG oslo_concurrency.lockutils [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "92ff85a4-5620-4dd0-8930-62b7f561edf6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.776 185484 DEBUG oslo_concurrency.lockutils [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.777 185484 DEBUG oslo_concurrency.lockutils [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.777 185484 DEBUG oslo_concurrency.lockutils [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.777 185484 DEBUG oslo_concurrency.lockutils [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.779 185484 INFO nova.compute.manager [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Terminating instance
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.781 185484 DEBUG nova.compute.manager [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 19:21:56 compute-0 kernel: tap31c187f6-64 (unregistering): left promiscuous mode
Jan 27 19:21:56 compute-0 NetworkManager[56191]: <info>  [1769541716.8373] device (tap31c187f6-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 19:21:56 compute-0 ovn_controller[97647]: 2026-01-27T19:21:56Z|00058|binding|INFO|Releasing lport 31c187f6-645a-4415-a7a2-7c358adeb7c3 from this chassis (sb_readonly=0)
Jan 27 19:21:56 compute-0 ovn_controller[97647]: 2026-01-27T19:21:56Z|00059|binding|INFO|Setting lport 31c187f6-645a-4415-a7a2-7c358adeb7c3 down in Southbound
Jan 27 19:21:56 compute-0 ovn_controller[97647]: 2026-01-27T19:21:56Z|00060|binding|INFO|Removing iface tap31c187f6-64 ovn-installed in OVS
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.851 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.854 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:56 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:56.870 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:04:d7 192.168.0.191'], port_security=['fa:16:3e:dc:04:d7 192.168.0.191'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-z6txynvcwoqi-6l35xl64nude-6efi37qfmun5-port-ynxgvvttwcdb', 'neutron:cidrs': '192.168.0.191/24', 'neutron:device_id': '92ff85a4-5620-4dd0-8930-62b7f561edf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-z6txynvcwoqi-6l35xl64nude-6efi37qfmun5-port-ynxgvvttwcdb', 'neutron:project_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a99badb-bb64-4e2f-95a8-78f317eb6676', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.228', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33ebbec8-56f4-45ac-84a6-f80dd4a7c167, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=31c187f6-645a-4415-a7a2-7c358adeb7c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.869 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.872 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:21:56 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:56.871 106898 INFO neutron.agent.ovn.metadata.agent [-] Port 31c187f6-645a-4415-a7a2-7c358adeb7c3 in datapath 4f32262d-dee8-406b-8a5a-09e95f48c8d5 unbound from our chassis
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.874 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4890MB free_disk=72.3724365234375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.874 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.874 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:21:56 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:56.877 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f32262d-dee8-406b-8a5a-09e95f48c8d5
Jan 27 19:21:56 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:56.899 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd12034-cd98-4b19-9464-f8b165ce3588]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:21:56 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 27 19:21:56 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 52.356s CPU time.
Jan 27 19:21:56 compute-0 systemd-machined[156762]: Machine qemu-4-instance-00000004 terminated.
Jan 27 19:21:56 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:56.928 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[05fd78b1-4f50-4a47-821b-0f13001e1acc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:21:56 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:56.931 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[f681e190-9110-44de-9631-98e53fe43533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.955 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance b6b280bb-d859-43f3-836a-f93d00510948 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.956 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 92ff85a4-5620-4dd0-8930-62b7f561edf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.957 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.957 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:21:56 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:56.958 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcdc0ca-4f7b-4ad7-9759-a19316a21c34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:21:56 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:56.975 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[40267387-699c-4650-bb7f-acb8fdab888d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f32262d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:bd:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383551, 'reachable_time': 20803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246477, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:21:56 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:56.990 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[385bb768-ba6f-4c0e-89df-750d571442b9]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap4f32262d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383562, 'tstamp': 383562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246478, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4f32262d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383566, 'tstamp': 383566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246478, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:21:56 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:56.992 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f32262d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:21:56 compute-0 nova_compute[185480]: 2026-01-27 19:21:56.994 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:57 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:57.003 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f32262d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.003 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:57 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:57.003 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:21:57 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:57.003 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f32262d-d0, col_values=(('external_ids', {'iface-id': '5950ebf0-6d13-4405-b07d-fec152665bda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:21:57 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:57.004 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.017 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.024 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.035 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.049 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.073 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.073 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.082 185484 INFO nova.virt.libvirt.driver [-] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Instance destroyed successfully.
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.083 185484 DEBUG nova.objects.instance [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'resources' on Instance uuid 92ff85a4-5620-4dd0-8930-62b7f561edf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.095 185484 DEBUG nova.virt.libvirt.vif [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T19:11:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-nvcwoqi-6l35xl64nude-6efi37qfmun5-vnf-ysy67xkjsmsf',id=4,image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T19:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='bcbb15ba-0a8c-4e68-9a0e-d4209b3d9871'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f04ec1493db14ca1adbb4b6abd1667b1',ramdisk_id='',reservation_id='r-jdtvuyp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T19:11:43Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yNzUzMjE5NTgyOTM2MzI4MjgzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI3NTMyMTk1ODI5MzYzMjgyODM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjc1MzIxOTU4MjkzNjMyODI4Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI3NTMyMTk1ODI5MzYzMjgyODM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yNzUzMjE5NTgyOTM2MzI4MjgzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yNzUzMjE5NTgyOTM2MzI4MjgzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 27 19:21:57 compute-0 nova_compute[185480]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjc1MzIxOTU4MjkzNjMyODI4Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI3NTMyMTk1ODI5MzYzMjgyODM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yNzUzMjE5NTgyOTM2MzI4MjgzPT0tLQo=',user_id='6d30d46dc88a4403b3a241949384d8f7',uuid=92ff85a4-5620-4dd0-8930-62b7f561edf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.095 185484 DEBUG nova.network.os_vif_util [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converting VIF {"id": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "address": "fa:16:3e:dc:04:d7", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.191", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31c187f6-64", "ovs_interfaceid": "31c187f6-645a-4415-a7a2-7c358adeb7c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.096 185484 DEBUG nova.network.os_vif_util [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:04:d7,bridge_name='br-int',has_traffic_filtering=True,id=31c187f6-645a-4415-a7a2-7c358adeb7c3,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31c187f6-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.097 185484 DEBUG os_vif [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:04:d7,bridge_name='br-int',has_traffic_filtering=True,id=31c187f6-645a-4415-a7a2-7c358adeb7c3,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31c187f6-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.099 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.100 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31c187f6-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.102 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.104 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.107 185484 INFO os_vif [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:04:d7,bridge_name='br-int',has_traffic_filtering=True,id=31c187f6-645a-4415-a7a2-7c358adeb7c3,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31c187f6-64')
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.108 185484 INFO nova.virt.libvirt.driver [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Deleting instance files /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6_del
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.109 185484 INFO nova.virt.libvirt.driver [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Deletion of /var/lib/nova/instances/92ff85a4-5620-4dd0-8930-62b7f561edf6_del complete
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.155 185484 INFO nova.compute.manager [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Took 0.37 seconds to destroy the instance on the hypervisor.
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.156 185484 DEBUG oslo.service.loopingcall [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.156 185484 DEBUG nova.compute.manager [-] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.157 185484 DEBUG nova.network.neutron [-] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 19:21:57 compute-0 rsyslogd[235877]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 19:21:57.095 185484 DEBUG nova.virt.libvirt.vif [None req-9d95ea30-fa [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.563 185484 DEBUG nova.compute.manager [req-37bffe4a-7b40-4268-9a25-470cfc73555b req-981f4f6c-7e98-4b0e-8789-bc93331b75c2 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Received event network-vif-unplugged-31c187f6-645a-4415-a7a2-7c358adeb7c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.564 185484 DEBUG oslo_concurrency.lockutils [req-37bffe4a-7b40-4268-9a25-470cfc73555b req-981f4f6c-7e98-4b0e-8789-bc93331b75c2 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.564 185484 DEBUG oslo_concurrency.lockutils [req-37bffe4a-7b40-4268-9a25-470cfc73555b req-981f4f6c-7e98-4b0e-8789-bc93331b75c2 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.565 185484 DEBUG oslo_concurrency.lockutils [req-37bffe4a-7b40-4268-9a25-470cfc73555b req-981f4f6c-7e98-4b0e-8789-bc93331b75c2 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.565 185484 DEBUG nova.compute.manager [req-37bffe4a-7b40-4268-9a25-470cfc73555b req-981f4f6c-7e98-4b0e-8789-bc93331b75c2 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] No waiting events found dispatching network-vif-unplugged-31c187f6-645a-4415-a7a2-7c358adeb7c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.566 185484 DEBUG nova.compute.manager [req-37bffe4a-7b40-4268-9a25-470cfc73555b req-981f4f6c-7e98-4b0e-8789-bc93331b75c2 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Received event network-vif-unplugged-31c187f6-645a-4415-a7a2-7c358adeb7c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 19:21:57 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:57.815 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:21:57 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:21:57.816 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:21:57 compute-0 nova_compute[185480]: 2026-01-27 19:21:57.816 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:58 compute-0 nova_compute[185480]: 2026-01-27 19:21:58.012 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:21:58 compute-0 nova_compute[185480]: 2026-01-27 19:21:58.069 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:21:58 compute-0 nova_compute[185480]: 2026-01-27 19:21:58.069 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:21:58 compute-0 nova_compute[185480]: 2026-01-27 19:21:58.070 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:21:58 compute-0 podman[246500]: 2026-01-27 19:21:58.32249423 +0000 UTC m=+0.091732490 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:21:58 compute-0 podman[246501]: 2026-01-27 19:21:58.336702892 +0000 UTC m=+0.101458254 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.317 185484 DEBUG nova.network.neutron [-] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.422 185484 INFO nova.compute.manager [-] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Took 2.27 seconds to deallocate network for instance.
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.486 185484 DEBUG oslo_concurrency.lockutils [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.487 185484 DEBUG oslo_concurrency.lockutils [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.552 185484 DEBUG nova.compute.provider_tree [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.574 185484 DEBUG nova.scheduler.client.report [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.677 185484 DEBUG oslo_concurrency.lockutils [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.719 185484 DEBUG nova.compute.manager [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Received event network-changed-31c187f6-645a-4415-a7a2-7c358adeb7c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.719 185484 DEBUG nova.compute.manager [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Refreshing instance network info cache due to event network-changed-31c187f6-645a-4415-a7a2-7c358adeb7c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.720 185484 DEBUG oslo_concurrency.lockutils [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.720 185484 DEBUG oslo_concurrency.lockutils [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.720 185484 DEBUG nova.network.neutron [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Refreshing network info cache for port 31c187f6-645a-4415-a7a2-7c358adeb7c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:21:59 compute-0 podman[201378]: time="2026-01-27T19:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:21:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:21:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.789 185484 INFO nova.scheduler.client.report [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Deleted allocations for instance 92ff85a4-5620-4dd0-8930-62b7f561edf6
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.908 185484 DEBUG oslo_concurrency.lockutils [None req-9d95ea30-fa28-45ea-ab62-c3ff364ac17a 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:21:59 compute-0 nova_compute[185480]: 2026-01-27 19:21:59.977 185484 DEBUG nova.network.neutron [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:22:00 compute-0 nova_compute[185480]: 2026-01-27 19:22:00.323 185484 DEBUG nova.network.neutron [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:22:00 compute-0 nova_compute[185480]: 2026-01-27 19:22:00.344 185484 DEBUG oslo_concurrency.lockutils [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-92ff85a4-5620-4dd0-8930-62b7f561edf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:22:00 compute-0 nova_compute[185480]: 2026-01-27 19:22:00.345 185484 DEBUG nova.compute.manager [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Received event network-vif-plugged-31c187f6-645a-4415-a7a2-7c358adeb7c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:22:00 compute-0 nova_compute[185480]: 2026-01-27 19:22:00.345 185484 DEBUG oslo_concurrency.lockutils [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:22:00 compute-0 nova_compute[185480]: 2026-01-27 19:22:00.345 185484 DEBUG oslo_concurrency.lockutils [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:22:00 compute-0 nova_compute[185480]: 2026-01-27 19:22:00.345 185484 DEBUG oslo_concurrency.lockutils [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "92ff85a4-5620-4dd0-8930-62b7f561edf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:22:00 compute-0 nova_compute[185480]: 2026-01-27 19:22:00.345 185484 DEBUG nova.compute.manager [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] No waiting events found dispatching network-vif-plugged-31c187f6-645a-4415-a7a2-7c358adeb7c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:22:00 compute-0 nova_compute[185480]: 2026-01-27 19:22:00.346 185484 WARNING nova.compute.manager [req-0a052256-8e12-424e-870b-6eeca75f310d req-6357568e-7e7d-44d8-8626-a88c6382053d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Received unexpected event network-vif-plugged-31c187f6-645a-4415-a7a2-7c358adeb7c3 for instance with vm_state deleted and task_state None.
Jan 27 19:22:00 compute-0 nova_compute[185480]: 2026-01-27 19:22:00.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:22:01 compute-0 podman[246544]: 2026-01-27 19:22:01.323754673 +0000 UTC m=+0.089097697 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, config_id=kepler, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=)
Jan 27 19:22:01 compute-0 openstack_network_exporter[204477]: ERROR   19:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:22:01 compute-0 openstack_network_exporter[204477]: ERROR   19:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:22:01 compute-0 nova_compute[185480]: 2026-01-27 19:22:01.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:22:01 compute-0 nova_compute[185480]: 2026-01-27 19:22:01.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:22:02 compute-0 nova_compute[185480]: 2026-01-27 19:22:02.102 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:02.819 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:22:03 compute-0 nova_compute[185480]: 2026-01-27 19:22:03.014 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:05 compute-0 nova_compute[185480]: 2026-01-27 19:22:05.511 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:22:07 compute-0 nova_compute[185480]: 2026-01-27 19:22:07.105 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:07 compute-0 podman[246565]: 2026-01-27 19:22:07.370215316 +0000 UTC m=+0.131994869 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, config_id=openstack_network_exporter, io.buildah.version=1.33.7)
Jan 27 19:22:08 compute-0 nova_compute[185480]: 2026-01-27 19:22:08.016 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:12 compute-0 nova_compute[185480]: 2026-01-27 19:22:12.080 185484 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769541717.0788367, 92ff85a4-5620-4dd0-8930-62b7f561edf6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:22:12 compute-0 nova_compute[185480]: 2026-01-27 19:22:12.081 185484 INFO nova.compute.manager [-] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] VM Stopped (Lifecycle Event)
Jan 27 19:22:12 compute-0 nova_compute[185480]: 2026-01-27 19:22:12.105 185484 DEBUG nova.compute.manager [None req-ae2f608e-7ee2-41cc-9cc8-3fa631fcdd69 - - - - - -] [instance: 92ff85a4-5620-4dd0-8930-62b7f561edf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:22:12 compute-0 nova_compute[185480]: 2026-01-27 19:22:12.107 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:13 compute-0 nova_compute[185480]: 2026-01-27 19:22:13.018 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:14 compute-0 podman[246585]: 2026-01-27 19:22:14.352615008 +0000 UTC m=+0.122071000 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.578 185484 DEBUG oslo_concurrency.lockutils [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "b6b280bb-d859-43f3-836a-f93d00510948" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.579 185484 DEBUG oslo_concurrency.lockutils [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.579 185484 DEBUG oslo_concurrency.lockutils [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "b6b280bb-d859-43f3-836a-f93d00510948-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.580 185484 DEBUG oslo_concurrency.lockutils [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.581 185484 DEBUG oslo_concurrency.lockutils [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.583 185484 INFO nova.compute.manager [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Terminating instance
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.585 185484 DEBUG nova.compute.manager [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 19:22:14 compute-0 kernel: tapb7e20f48-5e (unregistering): left promiscuous mode
Jan 27 19:22:14 compute-0 NetworkManager[56191]: <info>  [1769541734.6508] device (tapb7e20f48-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 19:22:14 compute-0 ovn_controller[97647]: 2026-01-27T19:22:14Z|00061|binding|INFO|Releasing lport b7e20f48-5e15-4381-8111-2bbf9ae03610 from this chassis (sb_readonly=0)
Jan 27 19:22:14 compute-0 ovn_controller[97647]: 2026-01-27T19:22:14Z|00062|binding|INFO|Setting lport b7e20f48-5e15-4381-8111-2bbf9ae03610 down in Southbound
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.670 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:14 compute-0 ovn_controller[97647]: 2026-01-27T19:22:14Z|00063|binding|INFO|Removing iface tapb7e20f48-5e ovn-installed in OVS
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.675 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:14 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:14.681 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:d9:f9 192.168.0.162'], port_security=['fa:16:3e:74:d9:f9 192.168.0.162'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.162/24', 'neutron:device_id': 'b6b280bb-d859-43f3-836a-f93d00510948', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f04ec1493db14ca1adbb4b6abd1667b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a99badb-bb64-4e2f-95a8-78f317eb6676', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33ebbec8-56f4-45ac-84a6-f80dd4a7c167, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=b7e20f48-5e15-4381-8111-2bbf9ae03610) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:22:14 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:14.682 106898 INFO neutron.agent.ovn.metadata.agent [-] Port b7e20f48-5e15-4381-8111-2bbf9ae03610 in datapath 4f32262d-dee8-406b-8a5a-09e95f48c8d5 unbound from our chassis
Jan 27 19:22:14 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:14.684 106898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4f32262d-dee8-406b-8a5a-09e95f48c8d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 19:22:14 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:14.685 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[20ec643c-40a6-4ebf-9ea7-f347d0282092]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:22:14 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:14.686 106898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5 namespace which is not needed anymore
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.693 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:14 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 27 19:22:14 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 2min 44.863s CPU time.
Jan 27 19:22:14 compute-0 systemd-machined[156762]: Machine qemu-1-instance-00000001 terminated.
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.901 185484 INFO nova.virt.libvirt.driver [-] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Instance destroyed successfully.
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.902 185484 DEBUG nova.objects.instance [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lazy-loading 'resources' on Instance uuid b6b280bb-d859-43f3-836a-f93d00510948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:22:14 compute-0 neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5[238931]: [NOTICE]   (238935) : haproxy version is 2.8.14-c23fe91
Jan 27 19:22:14 compute-0 neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5[238931]: [NOTICE]   (238935) : path to executable is /usr/sbin/haproxy
Jan 27 19:22:14 compute-0 neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5[238931]: [WARNING]  (238935) : Exiting Master process...
Jan 27 19:22:14 compute-0 neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5[238931]: [ALERT]    (238935) : Current worker (238937) exited with code 143 (Terminated)
Jan 27 19:22:14 compute-0 neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5[238931]: [WARNING]  (238935) : All workers exited. Exiting... (0)
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.918 185484 DEBUG nova.virt.libvirt.vif [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T19:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T19:05:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f04ec1493db14ca1adbb4b6abd1667b1',ramdisk_id='',reservation_id='r-itl5lg4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='525193b7-cb5a-4d63-9747-3b917622bbe3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T19:05:16Z,user_data=None,user_id='6d30d46dc88a4403b3a241949384d8f7',uuid=b6b280bb-d859-43f3-836a-f93d00510948,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.919 185484 DEBUG nova.network.os_vif_util [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converting VIF {"id": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "address": "fa:16:3e:74:d9:f9", "network": {"id": "4f32262d-dee8-406b-8a5a-09e95f48c8d5", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f04ec1493db14ca1adbb4b6abd1667b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7e20f48-5e", "ovs_interfaceid": "b7e20f48-5e15-4381-8111-2bbf9ae03610", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.921 185484 DEBUG nova.network.os_vif_util [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:74:d9:f9,bridge_name='br-int',has_traffic_filtering=True,id=b7e20f48-5e15-4381-8111-2bbf9ae03610,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e20f48-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:22:14 compute-0 systemd[1]: libpod-94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd.scope: Deactivated successfully.
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.922 185484 DEBUG os_vif [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:d9:f9,bridge_name='br-int',has_traffic_filtering=True,id=b7e20f48-5e15-4381-8111-2bbf9ae03610,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e20f48-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 19:22:14 compute-0 podman[246636]: 2026-01-27 19:22:14.925317949 +0000 UTC m=+0.081319139 container died 94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.925 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.926 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7e20f48-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.929 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.931 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.936 185484 INFO os_vif [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:d9:f9,bridge_name='br-int',has_traffic_filtering=True,id=b7e20f48-5e15-4381-8111-2bbf9ae03610,network=Network(4f32262d-dee8-406b-8a5a-09e95f48c8d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7e20f48-5e')
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.937 185484 INFO nova.virt.libvirt.driver [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Deleting instance files /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948_del
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.938 185484 INFO nova.virt.libvirt.driver [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Deletion of /var/lib/nova/instances/b6b280bb-d859-43f3-836a-f93d00510948_del complete
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.989 185484 INFO nova.compute.manager [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Took 0.40 seconds to destroy the instance on the hypervisor.
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.990 185484 DEBUG oslo.service.loopingcall [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.990 185484 DEBUG nova.compute.manager [-] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 19:22:14 compute-0 nova_compute[185480]: 2026-01-27 19:22:14.990 185484 DEBUG nova.network.neutron [-] [instance: b6b280bb-d859-43f3-836a-f93d00510948] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 19:22:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd-userdata-shm.mount: Deactivated successfully.
Jan 27 19:22:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a23d5e1cd4ce04d9e99883c89e44a7aeeb78c92d34b779e9856e86c91a9c6e2-merged.mount: Deactivated successfully.
Jan 27 19:22:15 compute-0 podman[246636]: 2026-01-27 19:22:15.021932356 +0000 UTC m=+0.177933546 container cleanup 94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 27 19:22:15 compute-0 systemd[1]: libpod-conmon-94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd.scope: Deactivated successfully.
Jan 27 19:22:15 compute-0 podman[246680]: 2026-01-27 19:22:15.134594109 +0000 UTC m=+0.076255248 container remove 94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:22:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:15.149 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb42a63-7a7e-421e-8d3e-0c7f695fd0e5]: (4, ('Tue Jan 27 07:22:14 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5 (94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd)\n94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd\nTue Jan 27 07:22:15 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5 (94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd)\n94343d466c6939eb747e11afa572c28e2dd25ce41dfd653b0183a19c7b935bfd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:22:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:15.152 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5fbaea-2263-4a4c-ae0c-2ad896590a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:22:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:15.154 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f32262d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:22:15 compute-0 nova_compute[185480]: 2026-01-27 19:22:15.170 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:15 compute-0 kernel: tap4f32262d-d0: left promiscuous mode
Jan 27 19:22:15 compute-0 nova_compute[185480]: 2026-01-27 19:22:15.200 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:15.205 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[8f860e4d-4875-40a7-ac0d-835ab57c8807]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:22:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:15.224 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[df185e83-fbf2-480b-a292-df7fe1ece33e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:22:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:15.226 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[71aea59a-86ee-45a7-acf3-9564f36bf955]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:22:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:15.253 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[cf19e3fc-79ae-43f4-a707-33a92277b35f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383541, 'reachable_time': 18608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246693, 'error': None, 'target': 'ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:22:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d4f32262d\x2ddee8\x2d406b\x2d8a5a\x2d09e95f48c8d5.mount: Deactivated successfully.
Jan 27 19:22:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:15.273 107353 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4f32262d-dee8-406b-8a5a-09e95f48c8d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 19:22:15 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:15.275 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f3893b-5d80-4927-94ee-4f3c972c98e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:22:15 compute-0 nova_compute[185480]: 2026-01-27 19:22:15.508 185484 DEBUG nova.compute.manager [req-470e7dd4-10f3-4d00-b806-c95899b98164 req-436c58e7-8584-4645-bb7c-cbb0c86b3993 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Received event network-vif-unplugged-b7e20f48-5e15-4381-8111-2bbf9ae03610 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:22:15 compute-0 nova_compute[185480]: 2026-01-27 19:22:15.509 185484 DEBUG oslo_concurrency.lockutils [req-470e7dd4-10f3-4d00-b806-c95899b98164 req-436c58e7-8584-4645-bb7c-cbb0c86b3993 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "b6b280bb-d859-43f3-836a-f93d00510948-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:22:15 compute-0 nova_compute[185480]: 2026-01-27 19:22:15.510 185484 DEBUG oslo_concurrency.lockutils [req-470e7dd4-10f3-4d00-b806-c95899b98164 req-436c58e7-8584-4645-bb7c-cbb0c86b3993 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:22:15 compute-0 nova_compute[185480]: 2026-01-27 19:22:15.510 185484 DEBUG oslo_concurrency.lockutils [req-470e7dd4-10f3-4d00-b806-c95899b98164 req-436c58e7-8584-4645-bb7c-cbb0c86b3993 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:22:15 compute-0 nova_compute[185480]: 2026-01-27 19:22:15.510 185484 DEBUG nova.compute.manager [req-470e7dd4-10f3-4d00-b806-c95899b98164 req-436c58e7-8584-4645-bb7c-cbb0c86b3993 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] No waiting events found dispatching network-vif-unplugged-b7e20f48-5e15-4381-8111-2bbf9ae03610 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:22:15 compute-0 nova_compute[185480]: 2026-01-27 19:22:15.511 185484 DEBUG nova.compute.manager [req-470e7dd4-10f3-4d00-b806-c95899b98164 req-436c58e7-8584-4645-bb7c-cbb0c86b3993 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Received event network-vif-unplugged-b7e20f48-5e15-4381-8111-2bbf9ae03610 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 19:22:15 compute-0 nova_compute[185480]: 2026-01-27 19:22:15.998 185484 DEBUG nova.network.neutron [-] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:22:16 compute-0 nova_compute[185480]: 2026-01-27 19:22:16.020 185484 INFO nova.compute.manager [-] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Took 1.03 seconds to deallocate network for instance.
Jan 27 19:22:16 compute-0 nova_compute[185480]: 2026-01-27 19:22:16.077 185484 DEBUG oslo_concurrency.lockutils [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:22:16 compute-0 nova_compute[185480]: 2026-01-27 19:22:16.078 185484 DEBUG oslo_concurrency.lockutils [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:22:16 compute-0 nova_compute[185480]: 2026-01-27 19:22:16.109 185484 DEBUG nova.compute.manager [req-d40c217f-93a8-46ed-a85e-b48c6662fcd2 req-ef114cb9-0537-441d-a8bd-d0d2f250321d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Received event network-vif-deleted-b7e20f48-5e15-4381-8111-2bbf9ae03610 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:22:16 compute-0 nova_compute[185480]: 2026-01-27 19:22:16.151 185484 DEBUG nova.compute.provider_tree [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:22:16 compute-0 nova_compute[185480]: 2026-01-27 19:22:16.167 185484 DEBUG nova.scheduler.client.report [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:22:16 compute-0 nova_compute[185480]: 2026-01-27 19:22:16.190 185484 DEBUG oslo_concurrency.lockutils [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:22:16 compute-0 nova_compute[185480]: 2026-01-27 19:22:16.223 185484 INFO nova.scheduler.client.report [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Deleted allocations for instance b6b280bb-d859-43f3-836a-f93d00510948
Jan 27 19:22:16 compute-0 nova_compute[185480]: 2026-01-27 19:22:16.278 185484 DEBUG oslo_concurrency.lockutils [None req-1ad6dad3-4430-489a-a85a-0eea4fa97049 6d30d46dc88a4403b3a241949384d8f7 f04ec1493db14ca1adbb4b6abd1667b1 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:22:16 compute-0 podman[246695]: 2026-01-27 19:22:16.306845108 +0000 UTC m=+0.078242285 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 19:22:16 compute-0 podman[246697]: 2026-01-27 19:22:16.341621345 +0000 UTC m=+0.098093463 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 19:22:16 compute-0 podman[246696]: 2026-01-27 19:22:16.353767947 +0000 UTC m=+0.126565298 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 27 19:22:17 compute-0 nova_compute[185480]: 2026-01-27 19:22:17.593 185484 DEBUG nova.compute.manager [req-83e682dd-10cc-4ae7-829e-7d1d97370fa7 req-4f5cd4f1-ad32-4f11-a55e-1e23e69880f8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Received event network-vif-plugged-b7e20f48-5e15-4381-8111-2bbf9ae03610 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:22:17 compute-0 nova_compute[185480]: 2026-01-27 19:22:17.594 185484 DEBUG oslo_concurrency.lockutils [req-83e682dd-10cc-4ae7-829e-7d1d97370fa7 req-4f5cd4f1-ad32-4f11-a55e-1e23e69880f8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "b6b280bb-d859-43f3-836a-f93d00510948-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:22:17 compute-0 nova_compute[185480]: 2026-01-27 19:22:17.595 185484 DEBUG oslo_concurrency.lockutils [req-83e682dd-10cc-4ae7-829e-7d1d97370fa7 req-4f5cd4f1-ad32-4f11-a55e-1e23e69880f8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:22:17 compute-0 nova_compute[185480]: 2026-01-27 19:22:17.595 185484 DEBUG oslo_concurrency.lockutils [req-83e682dd-10cc-4ae7-829e-7d1d97370fa7 req-4f5cd4f1-ad32-4f11-a55e-1e23e69880f8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "b6b280bb-d859-43f3-836a-f93d00510948-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:22:17 compute-0 nova_compute[185480]: 2026-01-27 19:22:17.595 185484 DEBUG nova.compute.manager [req-83e682dd-10cc-4ae7-829e-7d1d97370fa7 req-4f5cd4f1-ad32-4f11-a55e-1e23e69880f8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] No waiting events found dispatching network-vif-plugged-b7e20f48-5e15-4381-8111-2bbf9ae03610 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:22:17 compute-0 nova_compute[185480]: 2026-01-27 19:22:17.596 185484 WARNING nova.compute.manager [req-83e682dd-10cc-4ae7-829e-7d1d97370fa7 req-4f5cd4f1-ad32-4f11-a55e-1e23e69880f8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Received unexpected event network-vif-plugged-b7e20f48-5e15-4381-8111-2bbf9ae03610 for instance with vm_state deleted and task_state None.
Jan 27 19:22:18 compute-0 nova_compute[185480]: 2026-01-27 19:22:18.020 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:19 compute-0 nova_compute[185480]: 2026-01-27 19:22:19.930 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:20.531 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:22:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:20.531 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:22:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:22:20.531 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:22:23 compute-0 nova_compute[185480]: 2026-01-27 19:22:23.022 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:24 compute-0 nova_compute[185480]: 2026-01-27 19:22:24.935 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:28 compute-0 nova_compute[185480]: 2026-01-27 19:22:28.026 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:29 compute-0 podman[246762]: 2026-01-27 19:22:29.285125556 +0000 UTC m=+0.062830044 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:22:29 compute-0 podman[246763]: 2026-01-27 19:22:29.310039436 +0000 UTC m=+0.079270020 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 19:22:29 compute-0 podman[201378]: time="2026-01-27T19:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:22:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:22:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3914 "" "Go-http-client/1.1"
Jan 27 19:22:29 compute-0 nova_compute[185480]: 2026-01-27 19:22:29.898 185484 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769541734.8961818, b6b280bb-d859-43f3-836a-f93d00510948 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:22:29 compute-0 nova_compute[185480]: 2026-01-27 19:22:29.898 185484 INFO nova.compute.manager [-] [instance: b6b280bb-d859-43f3-836a-f93d00510948] VM Stopped (Lifecycle Event)
Jan 27 19:22:29 compute-0 nova_compute[185480]: 2026-01-27 19:22:29.918 185484 DEBUG nova.compute.manager [None req-d7ff2c2e-3987-456c-8418-fc50ae956852 - - - - - -] [instance: b6b280bb-d859-43f3-836a-f93d00510948] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:22:29 compute-0 nova_compute[185480]: 2026-01-27 19:22:29.939 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:31 compute-0 openstack_network_exporter[204477]: ERROR   19:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:22:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:22:31 compute-0 openstack_network_exporter[204477]: ERROR   19:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:22:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.101 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.102 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.118 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.120 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.120 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.126 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.126 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.126 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:22:32.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:22:32 compute-0 podman[246806]: 2026-01-27 19:22:32.367622744 +0000 UTC m=+0.131483587 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public, release-0.7.12=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-type=git, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30)
Jan 27 19:22:33 compute-0 nova_compute[185480]: 2026-01-27 19:22:33.029 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:34 compute-0 nova_compute[185480]: 2026-01-27 19:22:34.943 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:38 compute-0 nova_compute[185480]: 2026-01-27 19:22:38.031 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:38 compute-0 podman[246826]: 2026-01-27 19:22:38.319363697 +0000 UTC m=+0.094534688 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter)
Jan 27 19:22:39 compute-0 nova_compute[185480]: 2026-01-27 19:22:39.948 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:43 compute-0 nova_compute[185480]: 2026-01-27 19:22:43.034 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:44 compute-0 podman[246846]: 2026-01-27 19:22:44.761444207 +0000 UTC m=+0.083924422 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260126)
Jan 27 19:22:44 compute-0 nova_compute[185480]: 2026-01-27 19:22:44.953 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:45 compute-0 ovn_controller[97647]: 2026-01-27T19:22:45Z|00064|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Jan 27 19:22:47 compute-0 podman[246867]: 2026-01-27 19:22:47.334303213 +0000 UTC m=+0.096057253 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 19:22:47 compute-0 podman[246865]: 2026-01-27 19:22:47.37399935 +0000 UTC m=+0.135561796 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:22:47 compute-0 podman[246866]: 2026-01-27 19:22:47.382835422 +0000 UTC m=+0.136935578 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 19:22:47 compute-0 nova_compute[185480]: 2026-01-27 19:22:47.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:22:47 compute-0 nova_compute[185480]: 2026-01-27 19:22:47.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:22:47 compute-0 nova_compute[185480]: 2026-01-27 19:22:47.540 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 19:22:48 compute-0 nova_compute[185480]: 2026-01-27 19:22:48.036 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:49 compute-0 nova_compute[185480]: 2026-01-27 19:22:49.957 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:53 compute-0 nova_compute[185480]: 2026-01-27 19:22:53.040 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:53 compute-0 nova_compute[185480]: 2026-01-27 19:22:53.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:22:54 compute-0 nova_compute[185480]: 2026-01-27 19:22:54.962 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:56 compute-0 nova_compute[185480]: 2026-01-27 19:22:56.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:22:56 compute-0 nova_compute[185480]: 2026-01-27 19:22:56.576 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:22:56 compute-0 nova_compute[185480]: 2026-01-27 19:22:56.576 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:22:56 compute-0 nova_compute[185480]: 2026-01-27 19:22:56.577 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:22:56 compute-0 nova_compute[185480]: 2026-01-27 19:22:56.577 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:22:56 compute-0 nova_compute[185480]: 2026-01-27 19:22:56.961 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:22:56 compute-0 nova_compute[185480]: 2026-01-27 19:22:56.964 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5345MB free_disk=72.41444396972656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:22:56 compute-0 nova_compute[185480]: 2026-01-27 19:22:56.964 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:22:56 compute-0 nova_compute[185480]: 2026-01-27 19:22:56.965 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:22:57 compute-0 nova_compute[185480]: 2026-01-27 19:22:57.037 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:22:57 compute-0 nova_compute[185480]: 2026-01-27 19:22:57.038 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:22:57 compute-0 nova_compute[185480]: 2026-01-27 19:22:57.065 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:22:57 compute-0 nova_compute[185480]: 2026-01-27 19:22:57.085 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:22:57 compute-0 nova_compute[185480]: 2026-01-27 19:22:57.113 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:22:57 compute-0 nova_compute[185480]: 2026-01-27 19:22:57.114 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:22:58 compute-0 nova_compute[185480]: 2026-01-27 19:22:58.043 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:22:58 compute-0 nova_compute[185480]: 2026-01-27 19:22:58.108 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:22:58 compute-0 nova_compute[185480]: 2026-01-27 19:22:58.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:22:58 compute-0 nova_compute[185480]: 2026-01-27 19:22:58.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:22:59 compute-0 podman[201378]: time="2026-01-27T19:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:22:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:22:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3915 "" "Go-http-client/1.1"
Jan 27 19:22:59 compute-0 nova_compute[185480]: 2026-01-27 19:22:59.964 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:00 compute-0 podman[246933]: 2026-01-27 19:23:00.355571315 +0000 UTC m=+0.119632682 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:23:00 compute-0 podman[246934]: 2026-01-27 19:23:00.362326608 +0000 UTC m=+0.122167833 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 19:23:00 compute-0 nova_compute[185480]: 2026-01-27 19:23:00.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:23:01 compute-0 openstack_network_exporter[204477]: ERROR   19:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:23:01 compute-0 openstack_network_exporter[204477]: ERROR   19:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:23:01 compute-0 nova_compute[185480]: 2026-01-27 19:23:01.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:23:01 compute-0 nova_compute[185480]: 2026-01-27 19:23:01.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:23:02 compute-0 nova_compute[185480]: 2026-01-27 19:23:02.517 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:23:03 compute-0 nova_compute[185480]: 2026-01-27 19:23:03.045 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:03 compute-0 podman[246976]: 2026-01-27 19:23:03.330020192 +0000 UTC m=+0.100661085 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=base rhel9, container_name=kepler, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, distribution-scope=public, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 19:23:04 compute-0 nova_compute[185480]: 2026-01-27 19:23:04.968 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:08 compute-0 nova_compute[185480]: 2026-01-27 19:23:08.050 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:09 compute-0 podman[246997]: 2026-01-27 19:23:09.319304619 +0000 UTC m=+0.088495542 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, name=ubi9-minimal, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container)
Jan 27 19:23:09 compute-0 nova_compute[185480]: 2026-01-27 19:23:09.972 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:13 compute-0 nova_compute[185480]: 2026-01-27 19:23:13.050 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:14 compute-0 nova_compute[185480]: 2026-01-27 19:23:14.977 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:15 compute-0 podman[247018]: 2026-01-27 19:23:15.317727189 +0000 UTC m=+0.095518561 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 19:23:18 compute-0 nova_compute[185480]: 2026-01-27 19:23:18.054 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:18 compute-0 podman[247037]: 2026-01-27 19:23:18.296638841 +0000 UTC m=+0.065586840 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 19:23:18 compute-0 podman[247039]: 2026-01-27 19:23:18.356288008 +0000 UTC m=+0.109250642 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 19:23:18 compute-0 podman[247038]: 2026-01-27 19:23:18.385708826 +0000 UTC m=+0.145274889 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 27 19:23:19 compute-0 nova_compute[185480]: 2026-01-27 19:23:19.982 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:23:20.532 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:23:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:23:20.532 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:23:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:23:20.533 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:23:23 compute-0 nova_compute[185480]: 2026-01-27 19:23:23.056 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:24 compute-0 nova_compute[185480]: 2026-01-27 19:23:24.985 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:28 compute-0 nova_compute[185480]: 2026-01-27 19:23:28.059 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:29 compute-0 sshd-session[247103]: Invalid user sol from 45.148.10.240 port 54252
Jan 27 19:23:29 compute-0 sshd-session[247103]: Connection closed by invalid user sol 45.148.10.240 port 54252 [preauth]
Jan 27 19:23:29 compute-0 podman[201378]: time="2026-01-27T19:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:23:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:23:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3914 "" "Go-http-client/1.1"
Jan 27 19:23:29 compute-0 nova_compute[185480]: 2026-01-27 19:23:29.990 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:31 compute-0 podman[247105]: 2026-01-27 19:23:31.359293379 +0000 UTC m=+0.124772495 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:23:31 compute-0 podman[247106]: 2026-01-27 19:23:31.380016108 +0000 UTC m=+0.137924831 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 27 19:23:31 compute-0 openstack_network_exporter[204477]: ERROR   19:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:23:31 compute-0 openstack_network_exporter[204477]: ERROR   19:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:23:33 compute-0 nova_compute[185480]: 2026-01-27 19:23:33.061 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:34 compute-0 podman[247149]: 2026-01-27 19:23:34.326250016 +0000 UTC m=+0.100801498 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=)
Jan 27 19:23:34 compute-0 nova_compute[185480]: 2026-01-27 19:23:34.993 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:38 compute-0 nova_compute[185480]: 2026-01-27 19:23:38.066 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:39 compute-0 nova_compute[185480]: 2026-01-27 19:23:39.998 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:40 compute-0 podman[247167]: 2026-01-27 19:23:40.37432148 +0000 UTC m=+0.136342654 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7)
Jan 27 19:23:43 compute-0 nova_compute[185480]: 2026-01-27 19:23:43.066 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:45 compute-0 nova_compute[185480]: 2026-01-27 19:23:45.006 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:46 compute-0 podman[247187]: 2026-01-27 19:23:46.321643326 +0000 UTC m=+0.104886486 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Jan 27 19:23:48 compute-0 nova_compute[185480]: 2026-01-27 19:23:48.068 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:48 compute-0 nova_compute[185480]: 2026-01-27 19:23:48.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:23:48 compute-0 nova_compute[185480]: 2026-01-27 19:23:48.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:23:48 compute-0 nova_compute[185480]: 2026-01-27 19:23:48.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:23:48 compute-0 nova_compute[185480]: 2026-01-27 19:23:48.577 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 19:23:49 compute-0 podman[247207]: 2026-01-27 19:23:49.308123722 +0000 UTC m=+0.079799243 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 19:23:49 compute-0 podman[247212]: 2026-01-27 19:23:49.343436012 +0000 UTC m=+0.086490424 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 27 19:23:49 compute-0 podman[247208]: 2026-01-27 19:23:49.392068453 +0000 UTC m=+0.144709006 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 19:23:50 compute-0 nova_compute[185480]: 2026-01-27 19:23:50.010 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:53 compute-0 nova_compute[185480]: 2026-01-27 19:23:53.071 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:54 compute-0 nova_compute[185480]: 2026-01-27 19:23:54.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:23:55 compute-0 nova_compute[185480]: 2026-01-27 19:23:55.014 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:56 compute-0 nova_compute[185480]: 2026-01-27 19:23:56.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:23:56 compute-0 nova_compute[185480]: 2026-01-27 19:23:56.553 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:23:56 compute-0 nova_compute[185480]: 2026-01-27 19:23:56.553 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:23:56 compute-0 nova_compute[185480]: 2026-01-27 19:23:56.553 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:23:56 compute-0 nova_compute[185480]: 2026-01-27 19:23:56.553 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:23:56 compute-0 nova_compute[185480]: 2026-01-27 19:23:56.980 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:23:56 compute-0 nova_compute[185480]: 2026-01-27 19:23:56.982 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5357MB free_disk=72.41444396972656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:23:56 compute-0 nova_compute[185480]: 2026-01-27 19:23:56.982 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:23:56 compute-0 nova_compute[185480]: 2026-01-27 19:23:56.983 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:23:57 compute-0 nova_compute[185480]: 2026-01-27 19:23:57.058 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:23:57 compute-0 nova_compute[185480]: 2026-01-27 19:23:57.059 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:23:57 compute-0 nova_compute[185480]: 2026-01-27 19:23:57.111 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:23:57 compute-0 nova_compute[185480]: 2026-01-27 19:23:57.129 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:23:57 compute-0 nova_compute[185480]: 2026-01-27 19:23:57.131 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:23:57 compute-0 nova_compute[185480]: 2026-01-27 19:23:57.132 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:23:58 compute-0 nova_compute[185480]: 2026-01-27 19:23:58.075 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:23:58 compute-0 nova_compute[185480]: 2026-01-27 19:23:58.127 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:23:58 compute-0 nova_compute[185480]: 2026-01-27 19:23:58.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:23:58 compute-0 nova_compute[185480]: 2026-01-27 19:23:58.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:23:59 compute-0 podman[201378]: time="2026-01-27T19:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:23:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:23:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3909 "" "Go-http-client/1.1"
Jan 27 19:24:00 compute-0 nova_compute[185480]: 2026-01-27 19:24:00.018 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:00 compute-0 nova_compute[185480]: 2026-01-27 19:24:00.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:24:01 compute-0 openstack_network_exporter[204477]: ERROR   19:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:24:01 compute-0 openstack_network_exporter[204477]: ERROR   19:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:24:01 compute-0 nova_compute[185480]: 2026-01-27 19:24:01.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:24:02 compute-0 podman[247272]: 2026-01-27 19:24:02.301981404 +0000 UTC m=+0.076535994 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 19:24:02 compute-0 podman[247273]: 2026-01-27 19:24:02.335972083 +0000 UTC m=+0.105357658 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:24:02 compute-0 nova_compute[185480]: 2026-01-27 19:24:02.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:24:03 compute-0 nova_compute[185480]: 2026-01-27 19:24:03.078 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:04 compute-0 nova_compute[185480]: 2026-01-27 19:24:04.517 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:24:05 compute-0 nova_compute[185480]: 2026-01-27 19:24:05.022 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:05 compute-0 podman[247312]: 2026-01-27 19:24:05.338972118 +0000 UTC m=+0.101055344 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, config_id=kepler, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Jan 27 19:24:08 compute-0 nova_compute[185480]: 2026-01-27 19:24:08.084 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:10 compute-0 nova_compute[185480]: 2026-01-27 19:24:10.026 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:10 compute-0 nova_compute[185480]: 2026-01-27 19:24:10.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:24:11 compute-0 podman[247331]: 2026-01-27 19:24:11.355173402 +0000 UTC m=+0.121904567 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, release=1755695350, name=ubi9-minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git)
Jan 27 19:24:13 compute-0 nova_compute[185480]: 2026-01-27 19:24:13.084 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:15 compute-0 nova_compute[185480]: 2026-01-27 19:24:15.030 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:17 compute-0 podman[247351]: 2026-01-27 19:24:17.301162076 +0000 UTC m=+0.084350842 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, io.buildah.version=1.41.4, org.label-schema.build-date=20260126)
Jan 27 19:24:18 compute-0 nova_compute[185480]: 2026-01-27 19:24:18.087 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:20 compute-0 nova_compute[185480]: 2026-01-27 19:24:20.034 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:20 compute-0 podman[247371]: 2026-01-27 19:24:20.308004493 +0000 UTC m=+0.081935943 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:24:20 compute-0 podman[247373]: 2026-01-27 19:24:20.31529689 +0000 UTC m=+0.073460061 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 19:24:20 compute-0 podman[247372]: 2026-01-27 19:24:20.375190071 +0000 UTC m=+0.140718259 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 19:24:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:24:20.532 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:24:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:24:20.533 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:24:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:24:20.533 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:24:23 compute-0 nova_compute[185480]: 2026-01-27 19:24:23.092 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:25 compute-0 nova_compute[185480]: 2026-01-27 19:24:25.038 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:28 compute-0 nova_compute[185480]: 2026-01-27 19:24:28.096 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:29 compute-0 podman[201378]: time="2026-01-27T19:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:24:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:24:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3916 "" "Go-http-client/1.1"
Jan 27 19:24:30 compute-0 nova_compute[185480]: 2026-01-27 19:24:30.043 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:31 compute-0 openstack_network_exporter[204477]: ERROR   19:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:24:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:24:31 compute-0 openstack_network_exporter[204477]: ERROR   19:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:24:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.103 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.107 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.116 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.116 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'disk.device.read.latency': [], 'network.incoming.packets.error': [], 'network.outgoing.packets': [], 'disk.device.usage': [], 'network.outgoing.bytes': [], 'disk.device.write.latency': [], 'network.outgoing.bytes.delta': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.123 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'disk.device.read.latency': [], 'network.incoming.packets.error': [], 'network.outgoing.packets': [], 'disk.device.usage': [], 'network.outgoing.bytes': [], 'disk.device.write.latency': [], 'network.outgoing.bytes.delta': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.124 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'disk.device.read.latency': [], 'network.incoming.packets.error': [], 'network.outgoing.packets': [], 'disk.device.usage': [], 'network.outgoing.bytes': [], 'disk.device.write.latency': [], 'network.outgoing.bytes.delta': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.124 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'disk.device.read.latency': [], 'network.incoming.packets.error': [], 'network.outgoing.packets': [], 'disk.device.usage': [], 'network.outgoing.bytes': [], 'disk.device.write.latency': [], 'network.outgoing.bytes.delta': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.124 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'disk.device.read.latency': [], 'network.incoming.packets.error': [], 'network.outgoing.packets': [], 'disk.device.usage': [], 'network.outgoing.bytes': [], 'disk.device.write.latency': [], 'network.outgoing.bytes.delta': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.125 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'disk.device.read.latency': [], 'network.incoming.packets.error': [], 'network.outgoing.packets': [], 'disk.device.usage': [], 'network.outgoing.bytes': [], 'disk.device.write.latency': [], 'network.outgoing.bytes.delta': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.125 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'disk.device.read.latency': [], 'network.incoming.packets.error': [], 'network.outgoing.packets': [], 'disk.device.usage': [], 'network.outgoing.bytes': [], 'disk.device.write.latency': [], 'network.outgoing.bytes.delta': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.126 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.126 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.128 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.128 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:24:32.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:24:33 compute-0 nova_compute[185480]: 2026-01-27 19:24:33.100 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:33 compute-0 podman[247436]: 2026-01-27 19:24:33.325347972 +0000 UTC m=+0.092287112 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 19:24:33 compute-0 podman[247437]: 2026-01-27 19:24:33.363481501 +0000 UTC m=+0.126610190 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 27 19:24:35 compute-0 nova_compute[185480]: 2026-01-27 19:24:35.047 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:36 compute-0 podman[247479]: 2026-01-27 19:24:36.324026241 +0000 UTC m=+0.104871456 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, maintainer=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, name=ubi9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 27 19:24:38 compute-0 nova_compute[185480]: 2026-01-27 19:24:38.104 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:40 compute-0 nova_compute[185480]: 2026-01-27 19:24:40.051 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:42 compute-0 podman[247498]: 2026-01-27 19:24:42.313560726 +0000 UTC m=+0.090780967 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, release=1755695350, version=9.6, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.openshift.expose-services=)
Jan 27 19:24:43 compute-0 nova_compute[185480]: 2026-01-27 19:24:43.106 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:45 compute-0 nova_compute[185480]: 2026-01-27 19:24:45.056 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:48 compute-0 nova_compute[185480]: 2026-01-27 19:24:48.110 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:48 compute-0 podman[247520]: 2026-01-27 19:24:48.345714742 +0000 UTC m=+0.113589146 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 19:24:48 compute-0 nova_compute[185480]: 2026-01-27 19:24:48.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:24:48 compute-0 nova_compute[185480]: 2026-01-27 19:24:48.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:24:48 compute-0 nova_compute[185480]: 2026-01-27 19:24:48.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:24:48 compute-0 nova_compute[185480]: 2026-01-27 19:24:48.536 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 19:24:50 compute-0 nova_compute[185480]: 2026-01-27 19:24:50.060 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:51 compute-0 podman[247542]: 2026-01-27 19:24:51.312119917 +0000 UTC m=+0.074018114 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 19:24:51 compute-0 podman[247540]: 2026-01-27 19:24:51.357040598 +0000 UTC m=+0.120189886 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:24:51 compute-0 podman[247541]: 2026-01-27 19:24:51.372526947 +0000 UTC m=+0.136718051 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 19:24:53 compute-0 nova_compute[185480]: 2026-01-27 19:24:53.112 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:55 compute-0 nova_compute[185480]: 2026-01-27 19:24:55.064 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:56 compute-0 nova_compute[185480]: 2026-01-27 19:24:56.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:24:56 compute-0 nova_compute[185480]: 2026-01-27 19:24:56.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:24:56 compute-0 nova_compute[185480]: 2026-01-27 19:24:56.556 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:24:56 compute-0 nova_compute[185480]: 2026-01-27 19:24:56.556 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:24:56 compute-0 nova_compute[185480]: 2026-01-27 19:24:56.557 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:24:56 compute-0 nova_compute[185480]: 2026-01-27 19:24:56.558 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:24:56 compute-0 nova_compute[185480]: 2026-01-27 19:24:56.946 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:24:56 compute-0 nova_compute[185480]: 2026-01-27 19:24:56.948 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5367MB free_disk=72.41444396972656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:24:56 compute-0 nova_compute[185480]: 2026-01-27 19:24:56.948 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:24:56 compute-0 nova_compute[185480]: 2026-01-27 19:24:56.949 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:24:57 compute-0 nova_compute[185480]: 2026-01-27 19:24:57.178 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:24:57 compute-0 nova_compute[185480]: 2026-01-27 19:24:57.179 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:24:57 compute-0 nova_compute[185480]: 2026-01-27 19:24:57.208 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:24:57 compute-0 nova_compute[185480]: 2026-01-27 19:24:57.257 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:24:57 compute-0 nova_compute[185480]: 2026-01-27 19:24:57.260 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:24:57 compute-0 nova_compute[185480]: 2026-01-27 19:24:57.261 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:24:58 compute-0 nova_compute[185480]: 2026-01-27 19:24:58.115 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:24:58 compute-0 nova_compute[185480]: 2026-01-27 19:24:58.257 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:24:59 compute-0 podman[201378]: time="2026-01-27T19:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:24:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:24:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3908 "" "Go-http-client/1.1"
Jan 27 19:25:00 compute-0 nova_compute[185480]: 2026-01-27 19:25:00.067 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:00 compute-0 nova_compute[185480]: 2026-01-27 19:25:00.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:25:00 compute-0 nova_compute[185480]: 2026-01-27 19:25:00.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:25:01 compute-0 openstack_network_exporter[204477]: ERROR   19:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:25:01 compute-0 openstack_network_exporter[204477]: ERROR   19:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:25:01 compute-0 nova_compute[185480]: 2026-01-27 19:25:01.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:25:02 compute-0 nova_compute[185480]: 2026-01-27 19:25:02.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:25:03 compute-0 nova_compute[185480]: 2026-01-27 19:25:03.117 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:03 compute-0 nova_compute[185480]: 2026-01-27 19:25:03.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:25:04 compute-0 podman[247604]: 2026-01-27 19:25:04.314237382 +0000 UTC m=+0.090080158 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:25:04 compute-0 podman[247605]: 2026-01-27 19:25:04.348245374 +0000 UTC m=+0.112154268 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:25:05 compute-0 nova_compute[185480]: 2026-01-27 19:25:05.071 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:05 compute-0 nova_compute[185480]: 2026-01-27 19:25:05.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:25:07 compute-0 podman[247647]: 2026-01-27 19:25:07.369579856 +0000 UTC m=+0.129407101 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, release=1214.1726694543, config_id=kepler, container_name=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64)
Jan 27 19:25:08 compute-0 nova_compute[185480]: 2026-01-27 19:25:08.121 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:10 compute-0 nova_compute[185480]: 2026-01-27 19:25:10.076 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:13 compute-0 nova_compute[185480]: 2026-01-27 19:25:13.123 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:13 compute-0 podman[247666]: 2026-01-27 19:25:13.367064031 +0000 UTC m=+0.134065976 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 19:25:15 compute-0 nova_compute[185480]: 2026-01-27 19:25:15.081 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:18 compute-0 nova_compute[185480]: 2026-01-27 19:25:18.126 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:19 compute-0 podman[247687]: 2026-01-27 19:25:19.365122041 +0000 UTC m=+0.127930305 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2)
Jan 27 19:25:20 compute-0 nova_compute[185480]: 2026-01-27 19:25:20.089 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:25:20.536 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:25:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:25:20.536 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:25:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:25:20.536 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:25:22 compute-0 podman[247706]: 2026-01-27 19:25:22.306042882 +0000 UTC m=+0.074656970 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:25:22 compute-0 podman[247708]: 2026-01-27 19:25:22.316051947 +0000 UTC m=+0.078598687 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 19:25:22 compute-0 podman[247707]: 2026-01-27 19:25:22.412600942 +0000 UTC m=+0.170201600 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 19:25:23 compute-0 nova_compute[185480]: 2026-01-27 19:25:23.138 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:25 compute-0 nova_compute[185480]: 2026-01-27 19:25:25.094 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:28 compute-0 nova_compute[185480]: 2026-01-27 19:25:28.143 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:29 compute-0 podman[201378]: time="2026-01-27T19:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:25:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:25:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3906 "" "Go-http-client/1.1"
Jan 27 19:25:30 compute-0 nova_compute[185480]: 2026-01-27 19:25:30.101 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:31 compute-0 openstack_network_exporter[204477]: ERROR   19:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:25:31 compute-0 openstack_network_exporter[204477]: ERROR   19:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:25:33 compute-0 nova_compute[185480]: 2026-01-27 19:25:33.150 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:35 compute-0 nova_compute[185480]: 2026-01-27 19:25:35.105 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:35 compute-0 podman[247773]: 2026-01-27 19:25:35.357215232 +0000 UTC m=+0.124078792 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:25:35 compute-0 podman[247774]: 2026-01-27 19:25:35.395319525 +0000 UTC m=+0.153856680 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 19:25:37 compute-0 nova_compute[185480]: 2026-01-27 19:25:37.209 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:25:38 compute-0 nova_compute[185480]: 2026-01-27 19:25:38.152 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:38 compute-0 podman[247813]: 2026-01-27 19:25:38.384463748 +0000 UTC m=+0.151103473 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, distribution-scope=public, release-0.7.12=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, name=ubi9, config_id=kepler, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler)
Jan 27 19:25:40 compute-0 nova_compute[185480]: 2026-01-27 19:25:40.110 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:43 compute-0 nova_compute[185480]: 2026-01-27 19:25:43.153 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:44 compute-0 podman[247833]: 2026-01-27 19:25:44.332522663 +0000 UTC m=+0.099021638 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=openstack_network_exporter, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 27 19:25:45 compute-0 nova_compute[185480]: 2026-01-27 19:25:45.114 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:46 compute-0 sshd-session[247853]: Invalid user sol from 45.148.10.240 port 52568
Jan 27 19:25:46 compute-0 sshd-session[247853]: Connection closed by invalid user sol 45.148.10.240 port 52568 [preauth]
Jan 27 19:25:48 compute-0 nova_compute[185480]: 2026-01-27 19:25:48.154 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:48 compute-0 nova_compute[185480]: 2026-01-27 19:25:48.519 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:25:48 compute-0 nova_compute[185480]: 2026-01-27 19:25:48.519 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:25:48 compute-0 nova_compute[185480]: 2026-01-27 19:25:48.519 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:25:48 compute-0 nova_compute[185480]: 2026-01-27 19:25:48.544 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 19:25:50 compute-0 nova_compute[185480]: 2026-01-27 19:25:50.118 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:50 compute-0 podman[247856]: 2026-01-27 19:25:50.334095829 +0000 UTC m=+0.103341933 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 19:25:52 compute-0 nova_compute[185480]: 2026-01-27 19:25:52.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:25:52 compute-0 nova_compute[185480]: 2026-01-27 19:25:52.517 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 19:25:52 compute-0 nova_compute[185480]: 2026-01-27 19:25:52.814 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 19:25:53 compute-0 nova_compute[185480]: 2026-01-27 19:25:53.156 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:53 compute-0 podman[247875]: 2026-01-27 19:25:53.351243707 +0000 UTC m=+0.115443189 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:25:53 compute-0 podman[247877]: 2026-01-27 19:25:53.364455411 +0000 UTC m=+0.115856750 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 19:25:53 compute-0 podman[247876]: 2026-01-27 19:25:53.40321452 +0000 UTC m=+0.158791941 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 19:25:55 compute-0 nova_compute[185480]: 2026-01-27 19:25:55.122 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:57 compute-0 nova_compute[185480]: 2026-01-27 19:25:57.813 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:25:58 compute-0 nova_compute[185480]: 2026-01-27 19:25:58.159 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:25:58 compute-0 nova_compute[185480]: 2026-01-27 19:25:58.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:25:58 compute-0 nova_compute[185480]: 2026-01-27 19:25:58.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:25:59 compute-0 nova_compute[185480]: 2026-01-27 19:25:59.195 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:25:59 compute-0 nova_compute[185480]: 2026-01-27 19:25:59.197 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:25:59 compute-0 nova_compute[185480]: 2026-01-27 19:25:59.197 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:25:59 compute-0 nova_compute[185480]: 2026-01-27 19:25:59.197 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:25:59 compute-0 nova_compute[185480]: 2026-01-27 19:25:59.741 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:25:59 compute-0 nova_compute[185480]: 2026-01-27 19:25:59.742 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5347MB free_disk=72.4144401550293GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:25:59 compute-0 nova_compute[185480]: 2026-01-27 19:25:59.743 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:25:59 compute-0 nova_compute[185480]: 2026-01-27 19:25:59.743 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:25:59 compute-0 podman[201378]: time="2026-01-27T19:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:25:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:25:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3901 "" "Go-http-client/1.1"
Jan 27 19:26:00 compute-0 nova_compute[185480]: 2026-01-27 19:26:00.126 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:00 compute-0 nova_compute[185480]: 2026-01-27 19:26:00.404 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:26:00 compute-0 nova_compute[185480]: 2026-01-27 19:26:00.405 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:26:00 compute-0 nova_compute[185480]: 2026-01-27 19:26:00.522 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing inventories for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 19:26:00 compute-0 nova_compute[185480]: 2026-01-27 19:26:00.660 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating ProviderTree inventory for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 19:26:00 compute-0 nova_compute[185480]: 2026-01-27 19:26:00.661 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating inventory in ProviderTree for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 19:26:00 compute-0 nova_compute[185480]: 2026-01-27 19:26:00.696 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing aggregate associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 19:26:00 compute-0 nova_compute[185480]: 2026-01-27 19:26:00.742 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing trait associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, traits: HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 19:26:00 compute-0 nova_compute[185480]: 2026-01-27 19:26:00.774 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:26:00 compute-0 nova_compute[185480]: 2026-01-27 19:26:00.958 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:26:00 compute-0 nova_compute[185480]: 2026-01-27 19:26:00.959 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:26:00 compute-0 rsyslogd[235877]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 19:26:00 compute-0 nova_compute[185480]: 2026-01-27 19:26:00.959 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:26:00 compute-0 rsyslogd[235877]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 19:26:01 compute-0 openstack_network_exporter[204477]: ERROR   19:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:26:01 compute-0 openstack_network_exporter[204477]: ERROR   19:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:26:01 compute-0 nova_compute[185480]: 2026-01-27 19:26:01.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:26:02 compute-0 nova_compute[185480]: 2026-01-27 19:26:02.711 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:26:02 compute-0 nova_compute[185480]: 2026-01-27 19:26:02.712 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:26:02 compute-0 nova_compute[185480]: 2026-01-27 19:26:02.713 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:26:03 compute-0 nova_compute[185480]: 2026-01-27 19:26:03.161 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:03 compute-0 nova_compute[185480]: 2026-01-27 19:26:03.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:26:05 compute-0 nova_compute[185480]: 2026-01-27 19:26:05.130 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:05 compute-0 nova_compute[185480]: 2026-01-27 19:26:05.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:26:06 compute-0 podman[247945]: 2026-01-27 19:26:06.350219695 +0000 UTC m=+0.120854441 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:26:06 compute-0 podman[247946]: 2026-01-27 19:26:06.386603407 +0000 UTC m=+0.139551390 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 27 19:26:06 compute-0 nova_compute[185480]: 2026-01-27 19:26:06.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:26:07 compute-0 nova_compute[185480]: 2026-01-27 19:26:07.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:26:07 compute-0 nova_compute[185480]: 2026-01-27 19:26:07.517 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 19:26:08 compute-0 nova_compute[185480]: 2026-01-27 19:26:08.164 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:09 compute-0 podman[247985]: 2026-01-27 19:26:09.37291532 +0000 UTC m=+0.143611250 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, version=9.4, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, maintainer=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30)
Jan 27 19:26:10 compute-0 nova_compute[185480]: 2026-01-27 19:26:10.136 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:10 compute-0 nova_compute[185480]: 2026-01-27 19:26:10.578 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:26:13 compute-0 nova_compute[185480]: 2026-01-27 19:26:13.171 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:14 compute-0 podman[248004]: 2026-01-27 19:26:14.848065339 +0000 UTC m=+0.149238837 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, architecture=x86_64)
Jan 27 19:26:15 compute-0 nova_compute[185480]: 2026-01-27 19:26:15.141 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:18 compute-0 nova_compute[185480]: 2026-01-27 19:26:18.172 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:20 compute-0 nova_compute[185480]: 2026-01-27 19:26:20.143 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:26:20.536 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:26:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:26:20.537 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:26:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:26:20.537 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:26:21 compute-0 podman[248025]: 2026-01-27 19:26:21.379582489 +0000 UTC m=+0.151114003 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true)
Jan 27 19:26:23 compute-0 nova_compute[185480]: 2026-01-27 19:26:23.174 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:24 compute-0 nova_compute[185480]: 2026-01-27 19:26:24.173 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:26:24 compute-0 podman[248046]: 2026-01-27 19:26:24.310579078 +0000 UTC m=+0.067811653 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:26:24 compute-0 podman[248044]: 2026-01-27 19:26:24.311145592 +0000 UTC m=+0.072589280 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:26:24 compute-0 podman[248045]: 2026-01-27 19:26:24.413691584 +0000 UTC m=+0.167044003 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:26:25 compute-0 nova_compute[185480]: 2026-01-27 19:26:25.147 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:28 compute-0 nova_compute[185480]: 2026-01-27 19:26:28.177 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:29 compute-0 podman[201378]: time="2026-01-27T19:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:26:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:26:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3914 "" "Go-http-client/1.1"
Jan 27 19:26:30 compute-0 nova_compute[185480]: 2026-01-27 19:26:30.153 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:31 compute-0 openstack_network_exporter[204477]: ERROR   19:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:26:31 compute-0 openstack_network_exporter[204477]: ERROR   19:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.102 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.103 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.110 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'memory.usage': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.read.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.116 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:26:32.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:26:33 compute-0 nova_compute[185480]: 2026-01-27 19:26:33.179 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:35 compute-0 nova_compute[185480]: 2026-01-27 19:26:35.160 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:37 compute-0 podman[248108]: 2026-01-27 19:26:37.329561886 +0000 UTC m=+0.100070162 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:26:37 compute-0 podman[248109]: 2026-01-27 19:26:37.368469719 +0000 UTC m=+0.124272895 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 19:26:38 compute-0 nova_compute[185480]: 2026-01-27 19:26:38.183 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:38 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:26:38.792 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:26:38 compute-0 nova_compute[185480]: 2026-01-27 19:26:38.793 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:38 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:26:38.794 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:26:40 compute-0 nova_compute[185480]: 2026-01-27 19:26:40.164 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:40 compute-0 podman[248151]: 2026-01-27 19:26:40.31999385 +0000 UTC m=+0.099654673 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vendor=Red Hat, Inc., config_id=kepler, managed_by=edpm_ansible, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, container_name=kepler)
Jan 27 19:26:41 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:26:41.797 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:26:43 compute-0 nova_compute[185480]: 2026-01-27 19:26:43.186 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:45 compute-0 nova_compute[185480]: 2026-01-27 19:26:45.169 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:45 compute-0 podman[248172]: 2026-01-27 19:26:45.326336733 +0000 UTC m=+0.085805993 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41)
Jan 27 19:26:48 compute-0 nova_compute[185480]: 2026-01-27 19:26:48.191 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:49 compute-0 nova_compute[185480]: 2026-01-27 19:26:49.671 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:26:49 compute-0 nova_compute[185480]: 2026-01-27 19:26:49.672 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:26:49 compute-0 nova_compute[185480]: 2026-01-27 19:26:49.672 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:26:49 compute-0 nova_compute[185480]: 2026-01-27 19:26:49.722 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 19:26:50 compute-0 nova_compute[185480]: 2026-01-27 19:26:50.173 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:52 compute-0 podman[248191]: 2026-01-27 19:26:52.321308067 +0000 UTC m=+0.105854424 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:26:53 compute-0 nova_compute[185480]: 2026-01-27 19:26:53.193 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:55 compute-0 nova_compute[185480]: 2026-01-27 19:26:55.178 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:55 compute-0 podman[248213]: 2026-01-27 19:26:55.334818407 +0000 UTC m=+0.088451259 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 19:26:55 compute-0 podman[248211]: 2026-01-27 19:26:55.335461193 +0000 UTC m=+0.094405134 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:26:55 compute-0 podman[248212]: 2026-01-27 19:26:55.372018528 +0000 UTC m=+0.130762974 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 19:26:57 compute-0 nova_compute[185480]: 2026-01-27 19:26:57.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:26:58 compute-0 nova_compute[185480]: 2026-01-27 19:26:58.197 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:26:59 compute-0 nova_compute[185480]: 2026-01-27 19:26:59.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:26:59 compute-0 podman[201378]: time="2026-01-27T19:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:26:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:26:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3909 "" "Go-http-client/1.1"
Jan 27 19:27:00 compute-0 nova_compute[185480]: 2026-01-27 19:27:00.182 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:00 compute-0 nova_compute[185480]: 2026-01-27 19:27:00.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:27:00 compute-0 nova_compute[185480]: 2026-01-27 19:27:00.601 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:00 compute-0 nova_compute[185480]: 2026-01-27 19:27:00.601 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:00 compute-0 nova_compute[185480]: 2026-01-27 19:27:00.602 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:00 compute-0 nova_compute[185480]: 2026-01-27 19:27:00.602 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:27:01 compute-0 nova_compute[185480]: 2026-01-27 19:27:01.101 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:27:01 compute-0 nova_compute[185480]: 2026-01-27 19:27:01.103 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5370MB free_disk=72.4144401550293GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:27:01 compute-0 nova_compute[185480]: 2026-01-27 19:27:01.103 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:01 compute-0 nova_compute[185480]: 2026-01-27 19:27:01.104 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:01 compute-0 nova_compute[185480]: 2026-01-27 19:27:01.181 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:27:01 compute-0 nova_compute[185480]: 2026-01-27 19:27:01.181 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:27:01 compute-0 nova_compute[185480]: 2026-01-27 19:27:01.226 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:27:01 compute-0 nova_compute[185480]: 2026-01-27 19:27:01.291 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:27:01 compute-0 nova_compute[185480]: 2026-01-27 19:27:01.293 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:27:01 compute-0 nova_compute[185480]: 2026-01-27 19:27:01.294 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:01 compute-0 openstack_network_exporter[204477]: ERROR   19:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:27:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:27:01 compute-0 openstack_network_exporter[204477]: ERROR   19:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:27:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:27:03 compute-0 nova_compute[185480]: 2026-01-27 19:27:03.200 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:04 compute-0 nova_compute[185480]: 2026-01-27 19:27:04.295 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:27:04 compute-0 nova_compute[185480]: 2026-01-27 19:27:04.296 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:27:04 compute-0 nova_compute[185480]: 2026-01-27 19:27:04.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:27:05 compute-0 nova_compute[185480]: 2026-01-27 19:27:05.186 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:05 compute-0 nova_compute[185480]: 2026-01-27 19:27:05.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:27:06 compute-0 nova_compute[185480]: 2026-01-27 19:27:06.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:27:07 compute-0 nova_compute[185480]: 2026-01-27 19:27:07.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:27:08 compute-0 nova_compute[185480]: 2026-01-27 19:27:08.203 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:08 compute-0 podman[248275]: 2026-01-27 19:27:08.319480744 +0000 UTC m=+0.095899540 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 19:27:08 compute-0 podman[248274]: 2026-01-27 19:27:08.338979021 +0000 UTC m=+0.102032840 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 19:27:08 compute-0 ovn_controller[97647]: 2026-01-27T19:27:08Z|00065|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Jan 27 19:27:10 compute-0 nova_compute[185480]: 2026-01-27 19:27:10.191 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:11 compute-0 podman[248313]: 2026-01-27 19:27:11.362197111 +0000 UTC m=+0.136460096 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, config_id=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, version=9.4, name=ubi9, distribution-scope=public)
Jan 27 19:27:13 compute-0 nova_compute[185480]: 2026-01-27 19:27:13.206 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:15 compute-0 nova_compute[185480]: 2026-01-27 19:27:15.195 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:16 compute-0 podman[248332]: 2026-01-27 19:27:16.36719568 +0000 UTC m=+0.129171396 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 19:27:18 compute-0 nova_compute[185480]: 2026-01-27 19:27:18.208 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:20 compute-0 nova_compute[185480]: 2026-01-27 19:27:20.200 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:20.538 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:20.539 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:20.539 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:22.385 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:27:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:22.386 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:27:22 compute-0 nova_compute[185480]: 2026-01-27 19:27:22.392 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:23 compute-0 nova_compute[185480]: 2026-01-27 19:27:23.210 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:23 compute-0 podman[248353]: 2026-01-27 19:27:23.364753775 +0000 UTC m=+0.137117860 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 19:27:25 compute-0 nova_compute[185480]: 2026-01-27 19:27:25.204 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:26 compute-0 podman[248373]: 2026-01-27 19:27:26.320825717 +0000 UTC m=+0.090211370 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:27:26 compute-0 podman[248375]: 2026-01-27 19:27:26.357858495 +0000 UTC m=+0.114545127 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:27:26 compute-0 podman[248374]: 2026-01-27 19:27:26.375604279 +0000 UTC m=+0.145491405 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 27 19:27:28 compute-0 nova_compute[185480]: 2026-01-27 19:27:28.058 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:28 compute-0 nova_compute[185480]: 2026-01-27 19:27:28.213 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:29 compute-0 podman[201378]: time="2026-01-27T19:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:27:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 27 19:27:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3912 "" "Go-http-client/1.1"
Jan 27 19:27:29 compute-0 nova_compute[185480]: 2026-01-27 19:27:29.829 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:30 compute-0 nova_compute[185480]: 2026-01-27 19:27:30.209 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:30 compute-0 nova_compute[185480]: 2026-01-27 19:27:30.746 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:30 compute-0 nova_compute[185480]: 2026-01-27 19:27:30.836 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:31 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:31.388 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:27:31 compute-0 openstack_network_exporter[204477]: ERROR   19:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:27:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:27:31 compute-0 openstack_network_exporter[204477]: ERROR   19:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:27:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:27:33 compute-0 nova_compute[185480]: 2026-01-27 19:27:33.216 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:35 compute-0 nova_compute[185480]: 2026-01-27 19:27:35.201 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:35 compute-0 nova_compute[185480]: 2026-01-27 19:27:35.212 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:38 compute-0 nova_compute[185480]: 2026-01-27 19:27:38.219 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:38 compute-0 nova_compute[185480]: 2026-01-27 19:27:38.576 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:39 compute-0 podman[248439]: 2026-01-27 19:27:39.32730737 +0000 UTC m=+0.106607393 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 19:27:39 compute-0 podman[248440]: 2026-01-27 19:27:39.336848073 +0000 UTC m=+0.099572840 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 19:27:40 compute-0 nova_compute[185480]: 2026-01-27 19:27:40.099 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:40 compute-0 nova_compute[185480]: 2026-01-27 19:27:40.214 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:41 compute-0 nova_compute[185480]: 2026-01-27 19:27:41.524 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:41 compute-0 nova_compute[185480]: 2026-01-27 19:27:41.842 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:42 compute-0 podman[248481]: 2026-01-27 19:27:42.364289075 +0000 UTC m=+0.137144971 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=kepler, io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 27 19:27:43 compute-0 nova_compute[185480]: 2026-01-27 19:27:43.225 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:43 compute-0 nova_compute[185480]: 2026-01-27 19:27:43.495 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Acquiring lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:43 compute-0 nova_compute[185480]: 2026-01-27 19:27:43.496 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:43 compute-0 nova_compute[185480]: 2026-01-27 19:27:43.527 185484 DEBUG nova.compute.manager [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 19:27:43 compute-0 nova_compute[185480]: 2026-01-27 19:27:43.655 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:43 compute-0 nova_compute[185480]: 2026-01-27 19:27:43.655 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:43 compute-0 nova_compute[185480]: 2026-01-27 19:27:43.668 185484 DEBUG nova.virt.hardware [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 19:27:43 compute-0 nova_compute[185480]: 2026-01-27 19:27:43.668 185484 INFO nova.compute.claims [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Claim successful on node compute-0.ctlplane.example.com
Jan 27 19:27:43 compute-0 nova_compute[185480]: 2026-01-27 19:27:43.832 185484 DEBUG nova.compute.provider_tree [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:27:43 compute-0 nova_compute[185480]: 2026-01-27 19:27:43.909 185484 DEBUG nova.scheduler.client.report [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:27:43 compute-0 nova_compute[185480]: 2026-01-27 19:27:43.996 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:43 compute-0 nova_compute[185480]: 2026-01-27 19:27:43.997 185484 DEBUG nova.compute.manager [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.058 185484 DEBUG nova.compute.manager [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.058 185484 DEBUG nova.network.neutron [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.083 185484 INFO nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.100 185484 DEBUG nova.compute.manager [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.199 185484 DEBUG nova.compute.manager [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.201 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.201 185484 INFO nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Creating image(s)
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.203 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Acquiring lock "/var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.203 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "/var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.204 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "/var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.205 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Acquiring lock "69104a6fcf619df4c492b27a202c23b5821c0e32" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.206 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:44 compute-0 nova_compute[185480]: 2026-01-27 19:27:44.707 185484 DEBUG nova.policy [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dc82377b5f6f4792aeae1f181b8cf53b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3600960457704cf7bab0031c88d94f9d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 19:27:45 compute-0 nova_compute[185480]: 2026-01-27 19:27:45.219 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:46 compute-0 nova_compute[185480]: 2026-01-27 19:27:46.657 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:46 compute-0 nova_compute[185480]: 2026-01-27 19:27:46.730 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32.part --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:46 compute-0 nova_compute[185480]: 2026-01-27 19:27:46.732 185484 DEBUG nova.virt.images [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] 729797c6-2677-44bd-a4a8-949d1f57b0a2 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 27 19:27:46 compute-0 nova_compute[185480]: 2026-01-27 19:27:46.737 185484 DEBUG nova.privsep.utils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 19:27:46 compute-0 nova_compute[185480]: 2026-01-27 19:27:46.738 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32.part /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:46 compute-0 nova_compute[185480]: 2026-01-27 19:27:46.826 185484 DEBUG nova.network.neutron [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Successfully created port: f9c5e783-e45a-4c95-ac23-9dd75009c0e8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.203 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32.part /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32.converted" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.212 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.295 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32.converted --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.297 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.317 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:47 compute-0 podman[248508]: 2026-01-27 19:27:47.366100367 +0000 UTC m=+0.134720241 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.419 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.421 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Acquiring lock "69104a6fcf619df4c492b27a202c23b5821c0e32" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.422 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.437 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.505 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.506 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32,backing_fmt=raw /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.570 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32,backing_fmt=raw /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk 1073741824" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.572 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.573 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.654 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.656 185484 DEBUG nova.virt.disk.api [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Checking if we can resize image /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.657 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.719 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.720 185484 DEBUG nova.virt.disk.api [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Cannot resize image /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.720 185484 DEBUG nova.objects.instance [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lazy-loading 'migration_context' on Instance uuid 87df2e63-6d90-4f9b-9c89-e3156bc11b8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.744 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.745 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Ensure instance console log exists: /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.745 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.746 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:47 compute-0 nova_compute[185480]: 2026-01-27 19:27:47.746 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:48 compute-0 nova_compute[185480]: 2026-01-27 19:27:48.228 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:48 compute-0 nova_compute[185480]: 2026-01-27 19:27:48.449 185484 DEBUG nova.network.neutron [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Successfully updated port: f9c5e783-e45a-4c95-ac23-9dd75009c0e8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 19:27:48 compute-0 nova_compute[185480]: 2026-01-27 19:27:48.473 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Acquiring lock "refresh_cache-87df2e63-6d90-4f9b-9c89-e3156bc11b8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:27:48 compute-0 nova_compute[185480]: 2026-01-27 19:27:48.473 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Acquired lock "refresh_cache-87df2e63-6d90-4f9b-9c89-e3156bc11b8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:27:48 compute-0 nova_compute[185480]: 2026-01-27 19:27:48.474 185484 DEBUG nova.network.neutron [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:27:48 compute-0 nova_compute[185480]: 2026-01-27 19:27:48.794 185484 DEBUG nova.network.neutron [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:27:49 compute-0 nova_compute[185480]: 2026-01-27 19:27:49.869 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:50 compute-0 nova_compute[185480]: 2026-01-27 19:27:50.222 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:50 compute-0 nova_compute[185480]: 2026-01-27 19:27:50.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:27:50 compute-0 nova_compute[185480]: 2026-01-27 19:27:50.517 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:27:50 compute-0 nova_compute[185480]: 2026-01-27 19:27:50.517 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:27:50 compute-0 nova_compute[185480]: 2026-01-27 19:27:50.567 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 19:27:50 compute-0 nova_compute[185480]: 2026-01-27 19:27:50.567 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.473 185484 DEBUG nova.network.neutron [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Updating instance_info_cache with network_info: [{"id": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "address": "fa:16:3e:1f:3c:75", "network": {"id": "0c11641b-5524-434f-b21f-ef4387b27e56", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1707744569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3600960457704cf7bab0031c88d94f9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9c5e783-e4", "ovs_interfaceid": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.506 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Releasing lock "refresh_cache-87df2e63-6d90-4f9b-9c89-e3156bc11b8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.507 185484 DEBUG nova.compute.manager [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Instance network_info: |[{"id": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "address": "fa:16:3e:1f:3c:75", "network": {"id": "0c11641b-5524-434f-b21f-ef4387b27e56", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1707744569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3600960457704cf7bab0031c88d94f9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9c5e783-e4", "ovs_interfaceid": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.512 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Start _get_guest_xml network_info=[{"id": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "address": "fa:16:3e:1f:3c:75", "network": {"id": "0c11641b-5524-434f-b21f-ef4387b27e56", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1707744569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3600960457704cf7bab0031c88d94f9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9c5e783-e4", "ovs_interfaceid": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T19:26:43Z,direct_url=<?>,disk_format='qcow2',id=729797c6-2677-44bd-a4a8-949d1f57b0a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T19:26:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': '729797c6-2677-44bd-a4a8-949d1f57b0a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.523 185484 WARNING nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.533 185484 DEBUG nova.virt.libvirt.host [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.535 185484 DEBUG nova.virt.libvirt.host [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.540 185484 DEBUG nova.virt.libvirt.host [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.541 185484 DEBUG nova.virt.libvirt.host [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.543 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.544 185484 DEBUG nova.virt.hardware [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T19:26:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='49f81b8c-e0df-4a53-87c6-69576be59651',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T19:26:43Z,direct_url=<?>,disk_format='qcow2',id=729797c6-2677-44bd-a4a8-949d1f57b0a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T19:26:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.545 185484 DEBUG nova.virt.hardware [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.546 185484 DEBUG nova.virt.hardware [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.547 185484 DEBUG nova.virt.hardware [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.548 185484 DEBUG nova.virt.hardware [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.549 185484 DEBUG nova.virt.hardware [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.550 185484 DEBUG nova.virt.hardware [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.551 185484 DEBUG nova.virt.hardware [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.552 185484 DEBUG nova.virt.hardware [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.552 185484 DEBUG nova.virt.hardware [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.553 185484 DEBUG nova.virt.hardware [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.563 185484 DEBUG nova.virt.libvirt.vif [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-358082491',display_name='tempest-ServerAddressesTestJSON-server-358082491',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-358082491',id=6,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3600960457704cf7bab0031c88d94f9d',ramdisk_id='',reservation_id='r-fj1kutnx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-757206843',owner_user_name='tempest-ServerAddressesTestJSON-757206843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:27:44Z,user_data=None,user_id='dc82377b5f6f4792aeae1f181b8cf53b',uuid=87df2e63-6d90-4f9b-9c89-e3156bc11b8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "address": "fa:16:3e:1f:3c:75", "network": {"id": "0c11641b-5524-434f-b21f-ef4387b27e56", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1707744569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3600960457704cf7bab0031c88d94f9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9c5e783-e4", "ovs_interfaceid": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.564 185484 DEBUG nova.network.os_vif_util [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Converting VIF {"id": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "address": "fa:16:3e:1f:3c:75", "network": {"id": "0c11641b-5524-434f-b21f-ef4387b27e56", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1707744569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3600960457704cf7bab0031c88d94f9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9c5e783-e4", "ovs_interfaceid": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.566 185484 DEBUG nova.network.os_vif_util [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3c:75,bridge_name='br-int',has_traffic_filtering=True,id=f9c5e783-e45a-4c95-ac23-9dd75009c0e8,network=Network(0c11641b-5524-434f-b21f-ef4387b27e56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9c5e783-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.569 185484 DEBUG nova.objects.instance [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lazy-loading 'pci_devices' on Instance uuid 87df2e63-6d90-4f9b-9c89-e3156bc11b8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.589 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 19:27:51 compute-0 nova_compute[185480]:   <uuid>87df2e63-6d90-4f9b-9c89-e3156bc11b8b</uuid>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   <name>instance-00000006</name>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   <memory>131072</memory>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   <vcpu>1</vcpu>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   <metadata>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <nova:name>tempest-ServerAddressesTestJSON-server-358082491</nova:name>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <nova:creationTime>2026-01-27 19:27:51</nova:creationTime>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <nova:flavor name="m1.nano">
Jan 27 19:27:51 compute-0 nova_compute[185480]:         <nova:memory>128</nova:memory>
Jan 27 19:27:51 compute-0 nova_compute[185480]:         <nova:disk>1</nova:disk>
Jan 27 19:27:51 compute-0 nova_compute[185480]:         <nova:swap>0</nova:swap>
Jan 27 19:27:51 compute-0 nova_compute[185480]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 19:27:51 compute-0 nova_compute[185480]:         <nova:vcpus>1</nova:vcpus>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       </nova:flavor>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <nova:owner>
Jan 27 19:27:51 compute-0 nova_compute[185480]:         <nova:user uuid="dc82377b5f6f4792aeae1f181b8cf53b">tempest-ServerAddressesTestJSON-757206843-project-member</nova:user>
Jan 27 19:27:51 compute-0 nova_compute[185480]:         <nova:project uuid="3600960457704cf7bab0031c88d94f9d">tempest-ServerAddressesTestJSON-757206843</nova:project>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       </nova:owner>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <nova:root type="image" uuid="729797c6-2677-44bd-a4a8-949d1f57b0a2"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <nova:ports>
Jan 27 19:27:51 compute-0 nova_compute[185480]:         <nova:port uuid="f9c5e783-e45a-4c95-ac23-9dd75009c0e8">
Jan 27 19:27:51 compute-0 nova_compute[185480]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:         </nova:port>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       </nova:ports>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     </nova:instance>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   </metadata>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   <sysinfo type="smbios">
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <system>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <entry name="manufacturer">RDO</entry>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <entry name="product">OpenStack Compute</entry>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <entry name="serial">87df2e63-6d90-4f9b-9c89-e3156bc11b8b</entry>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <entry name="uuid">87df2e63-6d90-4f9b-9c89-e3156bc11b8b</entry>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <entry name="family">Virtual Machine</entry>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     </system>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   </sysinfo>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   <os>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <boot dev="hd"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <smbios mode="sysinfo"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   </os>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   <features>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <acpi/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <apic/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <vmcoreinfo/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   </features>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   <clock offset="utc">
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <timer name="hpet" present="no"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   </clock>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   <cpu mode="host-model" match="exact">
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   </cpu>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   <devices>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <target dev="vda" bus="virtio"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <disk type="file" device="cdrom">
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk.config"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <target dev="sda" bus="sata"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <interface type="ethernet">
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <mac address="fa:16:3e:1f:3c:75"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <mtu size="1442"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <target dev="tapf9c5e783-e4"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     </interface>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <serial type="pty">
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <log file="/var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/console.log" append="off"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     </serial>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <video>
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     </video>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <input type="tablet" bus="usb"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <rng model="virtio">
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <backend model="random">/dev/urandom</backend>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     </rng>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <controller type="usb" index="0"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     <memballoon model="virtio">
Jan 27 19:27:51 compute-0 nova_compute[185480]:       <stats period="10"/>
Jan 27 19:27:51 compute-0 nova_compute[185480]:     </memballoon>
Jan 27 19:27:51 compute-0 nova_compute[185480]:   </devices>
Jan 27 19:27:51 compute-0 nova_compute[185480]: </domain>
Jan 27 19:27:51 compute-0 nova_compute[185480]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.590 185484 DEBUG nova.compute.manager [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Preparing to wait for external event network-vif-plugged-f9c5e783-e45a-4c95-ac23-9dd75009c0e8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.590 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Acquiring lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.591 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.592 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.593 185484 DEBUG nova.virt.libvirt.vif [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-358082491',display_name='tempest-ServerAddressesTestJSON-server-358082491',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-358082491',id=6,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3600960457704cf7bab0031c88d94f9d',ramdisk_id='',reservation_id='r-fj1kutnx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-757206843',owner_user_name='tempest-ServerAddressesTestJSON-757206843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:27:44Z,user_data=None,user_id='dc82377b5f6f4792aeae1f181b8cf53b',uuid=87df2e63-6d90-4f9b-9c89-e3156bc11b8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "address": "fa:16:3e:1f:3c:75", "network": {"id": "0c11641b-5524-434f-b21f-ef4387b27e56", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1707744569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3600960457704cf7bab0031c88d94f9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9c5e783-e4", "ovs_interfaceid": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.594 185484 DEBUG nova.network.os_vif_util [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Converting VIF {"id": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "address": "fa:16:3e:1f:3c:75", "network": {"id": "0c11641b-5524-434f-b21f-ef4387b27e56", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1707744569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3600960457704cf7bab0031c88d94f9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9c5e783-e4", "ovs_interfaceid": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.596 185484 DEBUG nova.network.os_vif_util [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3c:75,bridge_name='br-int',has_traffic_filtering=True,id=f9c5e783-e45a-4c95-ac23-9dd75009c0e8,network=Network(0c11641b-5524-434f-b21f-ef4387b27e56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9c5e783-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.596 185484 DEBUG os_vif [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3c:75,bridge_name='br-int',has_traffic_filtering=True,id=f9c5e783-e45a-4c95-ac23-9dd75009c0e8,network=Network(0c11641b-5524-434f-b21f-ef4387b27e56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9c5e783-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.598 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.598 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.599 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.603 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.604 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf9c5e783-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.604 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf9c5e783-e4, col_values=(('external_ids', {'iface-id': 'f9c5e783-e45a-4c95-ac23-9dd75009c0e8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:3c:75', 'vm-uuid': '87df2e63-6d90-4f9b-9c89-e3156bc11b8b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.607 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.608 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:27:51 compute-0 NetworkManager[56191]: <info>  [1769542071.6096] manager: (tapf9c5e783-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.620 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.621 185484 INFO os_vif [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3c:75,bridge_name='br-int',has_traffic_filtering=True,id=f9c5e783-e45a-4c95-ac23-9dd75009c0e8,network=Network(0c11641b-5524-434f-b21f-ef4387b27e56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9c5e783-e4')
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.825 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.826 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.826 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] No VIF found with MAC fa:16:3e:1f:3c:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.827 185484 INFO nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Using config drive
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.846 185484 DEBUG nova.compute.manager [req-5b9ce4cf-c365-4418-a9e7-a3b2cb058614 req-305bce11-212b-4c7e-b29d-2344d837b93f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Received event network-changed-f9c5e783-e45a-4c95-ac23-9dd75009c0e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.847 185484 DEBUG nova.compute.manager [req-5b9ce4cf-c365-4418-a9e7-a3b2cb058614 req-305bce11-212b-4c7e-b29d-2344d837b93f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Refreshing instance network info cache due to event network-changed-f9c5e783-e45a-4c95-ac23-9dd75009c0e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.848 185484 DEBUG oslo_concurrency.lockutils [req-5b9ce4cf-c365-4418-a9e7-a3b2cb058614 req-305bce11-212b-4c7e-b29d-2344d837b93f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-87df2e63-6d90-4f9b-9c89-e3156bc11b8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.849 185484 DEBUG oslo_concurrency.lockutils [req-5b9ce4cf-c365-4418-a9e7-a3b2cb058614 req-305bce11-212b-4c7e-b29d-2344d837b93f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-87df2e63-6d90-4f9b-9c89-e3156bc11b8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:27:51 compute-0 nova_compute[185480]: 2026-01-27 19:27:51.850 185484 DEBUG nova.network.neutron [req-5b9ce4cf-c365-4418-a9e7-a3b2cb058614 req-305bce11-212b-4c7e-b29d-2344d837b93f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Refreshing network info cache for port f9c5e783-e45a-4c95-ac23-9dd75009c0e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.029 185484 INFO nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Creating config drive at /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk.config
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.034 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4fnyl4g7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.161 185484 DEBUG oslo_concurrency.processutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4fnyl4g7" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.231 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:53 compute-0 kernel: tapf9c5e783-e4: entered promiscuous mode
Jan 27 19:27:53 compute-0 NetworkManager[56191]: <info>  [1769542073.2478] manager: (tapf9c5e783-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.247 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:53 compute-0 ovn_controller[97647]: 2026-01-27T19:27:53Z|00066|binding|INFO|Claiming lport f9c5e783-e45a-4c95-ac23-9dd75009c0e8 for this chassis.
Jan 27 19:27:53 compute-0 ovn_controller[97647]: 2026-01-27T19:27:53Z|00067|binding|INFO|f9c5e783-e45a-4c95-ac23-9dd75009c0e8: Claiming fa:16:3e:1f:3c:75 10.100.0.6
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.263 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:3c:75 10.100.0.6'], port_security=['fa:16:3e:1f:3c:75 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '87df2e63-6d90-4f9b-9c89-e3156bc11b8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c11641b-5524-434f-b21f-ef4387b27e56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3600960457704cf7bab0031c88d94f9d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7f0ec1ae-1b56-407d-b130-c9856f04cabc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b019432-4716-4db3-a0f5-056a1de5deec, chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=f9c5e783-e45a-4c95-ac23-9dd75009c0e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.267 106898 INFO neutron.agent.ovn.metadata.agent [-] Port f9c5e783-e45a-4c95-ac23-9dd75009c0e8 in datapath 0c11641b-5524-434f-b21f-ef4387b27e56 bound to our chassis
Jan 27 19:27:53 compute-0 ovn_controller[97647]: 2026-01-27T19:27:53Z|00068|binding|INFO|Setting lport f9c5e783-e45a-4c95-ac23-9dd75009c0e8 ovn-installed in OVS
Jan 27 19:27:53 compute-0 ovn_controller[97647]: 2026-01-27T19:27:53Z|00069|binding|INFO|Setting lport f9c5e783-e45a-4c95-ac23-9dd75009c0e8 up in Southbound
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.268 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.270 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0c11641b-5524-434f-b21f-ef4387b27e56
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.275 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.287 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[638a2d31-a21f-459e-a545-da0c014169c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.288 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0c11641b-51 in ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.290 238834 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0c11641b-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.290 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[d29dcb9c-4280-451b-8a54-0bcc4c29c365]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.291 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[b694abfa-95e2-4b03-b712-790d10036d69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 systemd-udevd[248566]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:27:53 compute-0 systemd-machined[156762]: New machine qemu-6-instance-00000006.
Jan 27 19:27:53 compute-0 NetworkManager[56191]: <info>  [1769542073.3104] device (tapf9c5e783-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.309 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f5e490-236e-44f6-93e5-4f48028915a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 NetworkManager[56191]: <info>  [1769542073.3114] device (tapf9c5e783-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 19:27:53 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.339 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[f397d591-4469-4c51-81a2-3c7558184687]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.378 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[7cbf81c8-a2d8-428e-8fbd-35edbd3db101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.384 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f25e3b-3f93-4ed6-8729-5f51d729f368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 NetworkManager[56191]: <info>  [1769542073.3853] manager: (tap0c11641b-50): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.423 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1e27e1-a73b-4d38-a521-1f52494a21ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.428 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[d042f5c3-253b-4609-8d7b-96962b82e0ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 NetworkManager[56191]: <info>  [1769542073.4571] device (tap0c11641b-50): carrier: link connected
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.464 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[51e15480-01e5-434e-bd5c-b13e062e65d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.484 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[c885c432-de9a-435d-913b-f6d4bf90054a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c11641b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:c6:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519002, 'reachable_time': 33214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248600, 'error': None, 'target': 'ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.501 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[3eef1ae2-74bf-4523-ab84-f73187b91cc4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:c6f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519002, 'tstamp': 519002}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248602, 'error': None, 'target': 'ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.520 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4ba906-384a-47b1-83d0-29abf0eaa083]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c11641b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:c6:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519002, 'reachable_time': 33214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248603, 'error': None, 'target': 'ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.562 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[7997d026-705f-4070-864a-cbfabda7abf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.611 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquiring lock "be961482-e05a-4655-96ea-7d4810738a3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.612 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.630 185484 DEBUG nova.compute.manager [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.631 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[9972a6cf-9a72-4139-9d4f-3c3a92f00ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.633 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c11641b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.634 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.634 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c11641b-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.636 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:53 compute-0 kernel: tap0c11641b-50: entered promiscuous mode
Jan 27 19:27:53 compute-0 NetworkManager[56191]: <info>  [1769542073.6388] manager: (tap0c11641b-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.641 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0c11641b-50, col_values=(('external_ids', {'iface-id': '9e71aaa0-a32c-44e1-b9c4-07a1dc80196d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:27:53 compute-0 ovn_controller[97647]: 2026-01-27T19:27:53Z|00070|binding|INFO|Releasing lport 9e71aaa0-a32c-44e1-b9c4-07a1dc80196d from this chassis (sb_readonly=0)
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.641 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.662 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.662 106898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0c11641b-5524-434f-b21f-ef4387b27e56.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0c11641b-5524-434f-b21f-ef4387b27e56.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.663 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[93a72d32-8bc3-46e0-930a-6de7a1bcda4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.664 106898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: global
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     log         /dev/log local0 debug
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     log-tag     haproxy-metadata-proxy-0c11641b-5524-434f-b21f-ef4387b27e56
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     user        root
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     group       root
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     maxconn     1024
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     pidfile     /var/lib/neutron/external/pids/0c11641b-5524-434f-b21f-ef4387b27e56.pid.haproxy
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     daemon
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: defaults
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     log global
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     mode http
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     option httplog
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     option dontlognull
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     option http-server-close
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     option forwardfor
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     retries                 3
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     timeout http-request    30s
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     timeout connect         30s
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     timeout client          32s
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     timeout server          32s
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     timeout http-keep-alive 30s
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: listen listener
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     bind 169.254.169.254:80
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:     http-request add-header X-OVN-Network-ID 0c11641b-5524-434f-b21f-ef4387b27e56
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 19:27:53 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:27:53.666 106898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56', 'env', 'PROCESS_TAG=haproxy-0c11641b-5524-434f-b21f-ef4387b27e56', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0c11641b-5524-434f-b21f-ef4387b27e56.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.714 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.715 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.727 185484 DEBUG nova.virt.hardware [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.728 185484 INFO nova.compute.claims [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Claim successful on node compute-0.ctlplane.example.com
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.907 185484 DEBUG nova.compute.provider_tree [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.931 185484 DEBUG nova.scheduler.client.report [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.962 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:53 compute-0 nova_compute[185480]: 2026-01-27 19:27:53.963 185484 DEBUG nova.compute.manager [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.036 185484 DEBUG nova.compute.manager [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.037 185484 DEBUG nova.network.neutron [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.069 185484 INFO nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.096 185484 DEBUG nova.compute.manager [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.206 185484 DEBUG nova.compute.manager [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.209 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.210 185484 INFO nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Creating image(s)
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.211 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquiring lock "/var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.212 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "/var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.213 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "/var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:54 compute-0 podman[248635]: 2026-01-27 19:27:54.220353152 +0000 UTC m=+0.099222581 container create d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.230 185484 DEBUG oslo_concurrency.processutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:54 compute-0 podman[248635]: 2026-01-27 19:27:54.16063602 +0000 UTC m=+0.039505459 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 19:27:54 compute-0 systemd[1]: Started libpod-conmon-d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9.scope.
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.315 185484 DEBUG oslo_concurrency.processutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.316 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquiring lock "69104a6fcf619df4c492b27a202c23b5821c0e32" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.317 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:54 compute-0 systemd[1]: Started libcrun container.
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.330 185484 DEBUG oslo_concurrency.processutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38f1dea9aba8f1c0a8ce01a277487ad117a1f1fba3f1a71e0ffeb838223027f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.384 185484 DEBUG nova.policy [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87274dd877fa476fa885a5665ba052ec', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4102701402ff4f059dd67182960e5b64', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 19:27:54 compute-0 podman[248635]: 2026-01-27 19:27:54.389301902 +0000 UTC m=+0.268171391 container init d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:27:54 compute-0 podman[248635]: 2026-01-27 19:27:54.398889887 +0000 UTC m=+0.277759326 container start d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.399 185484 DEBUG oslo_concurrency.processutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.400 185484 DEBUG oslo_concurrency.processutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32,backing_fmt=raw /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:54 compute-0 neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56[248651]: [NOTICE]   (248660) : New worker (248663) forked
Jan 27 19:27:54 compute-0 neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56[248651]: [NOTICE]   (248660) : Loading success.
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.466 185484 DEBUG oslo_concurrency.processutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32,backing_fmt=raw /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk 1073741824" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.468 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.469 185484 DEBUG oslo_concurrency.processutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.537 185484 DEBUG oslo_concurrency.processutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.538 185484 DEBUG nova.virt.disk.api [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Checking if we can resize image /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.539 185484 DEBUG oslo_concurrency.processutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.600 185484 DEBUG nova.network.neutron [req-5b9ce4cf-c365-4418-a9e7-a3b2cb058614 req-305bce11-212b-4c7e-b29d-2344d837b93f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Updated VIF entry in instance network info cache for port f9c5e783-e45a-4c95-ac23-9dd75009c0e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.601 185484 DEBUG nova.network.neutron [req-5b9ce4cf-c365-4418-a9e7-a3b2cb058614 req-305bce11-212b-4c7e-b29d-2344d837b93f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Updating instance_info_cache with network_info: [{"id": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "address": "fa:16:3e:1f:3c:75", "network": {"id": "0c11641b-5524-434f-b21f-ef4387b27e56", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1707744569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3600960457704cf7bab0031c88d94f9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9c5e783-e4", "ovs_interfaceid": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.603 185484 DEBUG oslo_concurrency.processutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.603 185484 DEBUG nova.virt.disk.api [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Cannot resize image /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.603 185484 DEBUG nova.objects.instance [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lazy-loading 'migration_context' on Instance uuid be961482-e05a-4655-96ea-7d4810738a3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.622 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.623 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Ensure instance console log exists: /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.623 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.623 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.624 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.625 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542074.6246371, 87df2e63-6d90-4f9b-9c89-e3156bc11b8b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.626 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] VM Started (Lifecycle Event)
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.628 185484 DEBUG oslo_concurrency.lockutils [req-5b9ce4cf-c365-4418-a9e7-a3b2cb058614 req-305bce11-212b-4c7e-b29d-2344d837b93f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-87df2e63-6d90-4f9b-9c89-e3156bc11b8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.651 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.656 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542074.6248205, 87df2e63-6d90-4f9b-9c89-e3156bc11b8b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.657 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] VM Paused (Lifecycle Event)
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.679 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.684 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:27:54 compute-0 nova_compute[185480]: 2026-01-27 19:27:54.702 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:27:54 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 19:27:54 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 19:27:54 compute-0 podman[248687]: 2026-01-27 19:27:54.835603306 +0000 UTC m=+0.094231350 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 19:27:55 compute-0 nova_compute[185480]: 2026-01-27 19:27:55.798 185484 DEBUG nova.network.neutron [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Successfully created port: b0ee253c-f6f7-42b0-906e-b575d0104fbb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 19:27:56 compute-0 nova_compute[185480]: 2026-01-27 19:27:56.608 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:57 compute-0 podman[248725]: 2026-01-27 19:27:57.356789885 +0000 UTC m=+0.124699177 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 19:27:57 compute-0 podman[248727]: 2026-01-27 19:27:57.367244881 +0000 UTC m=+0.127094905 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:27:57 compute-0 podman[248726]: 2026-01-27 19:27:57.398882016 +0000 UTC m=+0.161926938 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 27 19:27:58 compute-0 nova_compute[185480]: 2026-01-27 19:27:58.234 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:27:58 compute-0 nova_compute[185480]: 2026-01-27 19:27:58.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:27:58 compute-0 nova_compute[185480]: 2026-01-27 19:27:58.981 185484 DEBUG nova.network.neutron [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Successfully updated port: b0ee253c-f6f7-42b0-906e-b575d0104fbb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 19:27:59 compute-0 nova_compute[185480]: 2026-01-27 19:27:59.001 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquiring lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:27:59 compute-0 nova_compute[185480]: 2026-01-27 19:27:59.001 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquired lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:27:59 compute-0 nova_compute[185480]: 2026-01-27 19:27:59.002 185484 DEBUG nova.network.neutron [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:27:59 compute-0 nova_compute[185480]: 2026-01-27 19:27:59.367 185484 DEBUG nova.network.neutron [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:27:59 compute-0 podman[201378]: time="2026-01-27T19:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:27:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:27:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 27 19:27:59 compute-0 nova_compute[185480]: 2026-01-27 19:27:59.878 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Acquiring lock "22a0bada-8656-451c-af9e-743901138320" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:27:59 compute-0 nova_compute[185480]: 2026-01-27 19:27:59.879 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "22a0bada-8656-451c-af9e-743901138320" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:27:59 compute-0 nova_compute[185480]: 2026-01-27 19:27:59.906 185484 DEBUG nova.compute.manager [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.000 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.001 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.011 185484 DEBUG nova.virt.hardware [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.012 185484 INFO nova.compute.claims [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Claim successful on node compute-0.ctlplane.example.com
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.168 185484 DEBUG nova.compute.provider_tree [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.197 185484 DEBUG nova.scheduler.client.report [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.234 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.236 185484 DEBUG nova.compute.manager [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.297 185484 DEBUG nova.compute.manager [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.298 185484 DEBUG nova.network.neutron [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.325 185484 INFO nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.342 185484 DEBUG nova.compute.manager [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.442 185484 DEBUG nova.compute.manager [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.444 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.444 185484 INFO nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Creating image(s)
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.445 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Acquiring lock "/var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.446 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "/var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.447 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "/var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.465 185484 DEBUG oslo_concurrency.processutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.526 185484 DEBUG oslo_concurrency.processutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.528 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Acquiring lock "69104a6fcf619df4c492b27a202c23b5821c0e32" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.529 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.543 185484 DEBUG oslo_concurrency.processutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.614 185484 DEBUG oslo_concurrency.processutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.617 185484 DEBUG oslo_concurrency.processutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32,backing_fmt=raw /var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.677 185484 DEBUG oslo_concurrency.processutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32,backing_fmt=raw /var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk 1073741824" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.679 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.680 185484 DEBUG oslo_concurrency.processutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.772 185484 DEBUG oslo_concurrency.processutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.773 185484 DEBUG nova.virt.disk.api [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Checking if we can resize image /var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.774 185484 DEBUG oslo_concurrency.processutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.818 185484 DEBUG nova.compute.manager [req-2920ade3-e59d-41e2-bbf8-0d59698851a6 req-acaf16cd-a92d-4988-abbe-73f4b0a9a82d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Received event network-changed-b0ee253c-f6f7-42b0-906e-b575d0104fbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.818 185484 DEBUG nova.compute.manager [req-2920ade3-e59d-41e2-bbf8-0d59698851a6 req-acaf16cd-a92d-4988-abbe-73f4b0a9a82d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Refreshing instance network info cache due to event network-changed-b0ee253c-f6f7-42b0-906e-b575d0104fbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.819 185484 DEBUG oslo_concurrency.lockutils [req-2920ade3-e59d-41e2-bbf8-0d59698851a6 req-acaf16cd-a92d-4988-abbe-73f4b0a9a82d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.848 185484 DEBUG oslo_concurrency.processutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.849 185484 DEBUG nova.virt.disk.api [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Cannot resize image /var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.850 185484 DEBUG nova.objects.instance [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lazy-loading 'migration_context' on Instance uuid 22a0bada-8656-451c-af9e-743901138320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.867 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.868 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Ensure instance console log exists: /var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.869 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.869 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.869 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.921 185484 DEBUG nova.policy [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'edcee212fadb4bbf9267ca664aca5ec7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '73cc7e581727408e9ec8407b9dbf203f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.927 185484 DEBUG nova.network.neutron [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updating instance_info_cache with network_info: [{"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.962 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Releasing lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.962 185484 DEBUG nova.compute.manager [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Instance network_info: |[{"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.963 185484 DEBUG oslo_concurrency.lockutils [req-2920ade3-e59d-41e2-bbf8-0d59698851a6 req-acaf16cd-a92d-4988-abbe-73f4b0a9a82d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.963 185484 DEBUG nova.network.neutron [req-2920ade3-e59d-41e2-bbf8-0d59698851a6 req-acaf16cd-a92d-4988-abbe-73f4b0a9a82d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Refreshing network info cache for port b0ee253c-f6f7-42b0-906e-b575d0104fbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.967 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Start _get_guest_xml network_info=[{"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T19:26:43Z,direct_url=<?>,disk_format='qcow2',id=729797c6-2677-44bd-a4a8-949d1f57b0a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T19:26:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': '729797c6-2677-44bd-a4a8-949d1f57b0a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.977 185484 WARNING nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.987 185484 DEBUG nova.virt.libvirt.host [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 19:28:00 compute-0 nova_compute[185480]: 2026-01-27 19:28:00.988 185484 DEBUG nova.virt.libvirt.host [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.001 185484 DEBUG nova.virt.libvirt.host [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.003 185484 DEBUG nova.virt.libvirt.host [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.003 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.004 185484 DEBUG nova.virt.hardware [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T19:26:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='49f81b8c-e0df-4a53-87c6-69576be59651',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T19:26:43Z,direct_url=<?>,disk_format='qcow2',id=729797c6-2677-44bd-a4a8-949d1f57b0a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T19:26:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.005 185484 DEBUG nova.virt.hardware [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.005 185484 DEBUG nova.virt.hardware [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.005 185484 DEBUG nova.virt.hardware [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.005 185484 DEBUG nova.virt.hardware [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.005 185484 DEBUG nova.virt.hardware [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.005 185484 DEBUG nova.virt.hardware [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.006 185484 DEBUG nova.virt.hardware [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.006 185484 DEBUG nova.virt.hardware [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.006 185484 DEBUG nova.virt.hardware [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.006 185484 DEBUG nova.virt.hardware [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.010 185484 DEBUG nova.virt.libvirt.vif [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1779555975',display_name='tempest-AttachInterfacesUnderV243Test-server-1779555975',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1779555975',id=7,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyqdFH2eG14J/b7rtsScThh8sRg5o4HAKOX+Lemc285oMyiai6RHuVK35i3ylocgEr0tzDFqxutYlO+GlSDEWhiQGDeQuYFNXRaqskh88JGgZZcqOJFgm88hLGt14KInw==',key_name='tempest-keypair-1143519939',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4102701402ff4f059dd67182960e5b64',ramdisk_id='',reservation_id='r-yvft0q2g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-66798845',owner_user_name='tempest-AttachInterfacesUnderV243Test-66798845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:27:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='87274dd877fa476fa885a5665ba052ec',uuid=be961482-e05a-4655-96ea-7d4810738a3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": 
"b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.011 185484 DEBUG nova.network.os_vif_util [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Converting VIF {"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.012 185484 DEBUG nova.network.os_vif_util [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:60:a3,bridge_name='br-int',has_traffic_filtering=True,id=b0ee253c-f6f7-42b0-906e-b575d0104fbb,network=Network(400ee5e7-0154-41c9-b068-b9b4ba0c2fdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0ee253c-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.013 185484 DEBUG nova.objects.instance [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lazy-loading 'pci_devices' on Instance uuid be961482-e05a-4655-96ea-7d4810738a3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.039 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] End _get_guest_xml xml=<domain type="kvm">
Jan 27 19:28:01 compute-0 nova_compute[185480]:   <uuid>be961482-e05a-4655-96ea-7d4810738a3c</uuid>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   <name>instance-00000007</name>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   <memory>131072</memory>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   <vcpu>1</vcpu>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   <metadata>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1779555975</nova:name>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <nova:creationTime>2026-01-27 19:28:00</nova:creationTime>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <nova:flavor name="m1.nano">
Jan 27 19:28:01 compute-0 nova_compute[185480]:         <nova:memory>128</nova:memory>
Jan 27 19:28:01 compute-0 nova_compute[185480]:         <nova:disk>1</nova:disk>
Jan 27 19:28:01 compute-0 nova_compute[185480]:         <nova:swap>0</nova:swap>
Jan 27 19:28:01 compute-0 nova_compute[185480]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 19:28:01 compute-0 nova_compute[185480]:         <nova:vcpus>1</nova:vcpus>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       </nova:flavor>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <nova:owner>
Jan 27 19:28:01 compute-0 nova_compute[185480]:         <nova:user uuid="87274dd877fa476fa885a5665ba052ec">tempest-AttachInterfacesUnderV243Test-66798845-project-member</nova:user>
Jan 27 19:28:01 compute-0 nova_compute[185480]:         <nova:project uuid="4102701402ff4f059dd67182960e5b64">tempest-AttachInterfacesUnderV243Test-66798845</nova:project>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       </nova:owner>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <nova:root type="image" uuid="729797c6-2677-44bd-a4a8-949d1f57b0a2"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <nova:ports>
Jan 27 19:28:01 compute-0 nova_compute[185480]:         <nova:port uuid="b0ee253c-f6f7-42b0-906e-b575d0104fbb">
Jan 27 19:28:01 compute-0 nova_compute[185480]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:         </nova:port>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       </nova:ports>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     </nova:instance>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   </metadata>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   <sysinfo type="smbios">
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <system>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <entry name="manufacturer">RDO</entry>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <entry name="product">OpenStack Compute</entry>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <entry name="serial">be961482-e05a-4655-96ea-7d4810738a3c</entry>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <entry name="uuid">be961482-e05a-4655-96ea-7d4810738a3c</entry>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <entry name="family">Virtual Machine</entry>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     </system>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   </sysinfo>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   <os>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <boot dev="hd"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <smbios mode="sysinfo"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   </os>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   <features>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <acpi/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <apic/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <vmcoreinfo/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   </features>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   <clock offset="utc">
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <timer name="hpet" present="no"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   </clock>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   <cpu mode="host-model" match="exact">
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   </cpu>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   <devices>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <target dev="vda" bus="virtio"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <disk type="file" device="cdrom">
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk.config"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <target dev="sda" bus="sata"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <interface type="ethernet">
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <mac address="fa:16:3e:d8:60:a3"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <mtu size="1442"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <target dev="tapb0ee253c-f6"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     </interface>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <serial type="pty">
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <log file="/var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/console.log" append="off"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     </serial>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <video>
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     </video>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <input type="tablet" bus="usb"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <rng model="virtio">
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <backend model="random">/dev/urandom</backend>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     </rng>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <controller type="usb" index="0"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     <memballoon model="virtio">
Jan 27 19:28:01 compute-0 nova_compute[185480]:       <stats period="10"/>
Jan 27 19:28:01 compute-0 nova_compute[185480]:     </memballoon>
Jan 27 19:28:01 compute-0 nova_compute[185480]:   </devices>
Jan 27 19:28:01 compute-0 nova_compute[185480]: </domain>
Jan 27 19:28:01 compute-0 nova_compute[185480]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.039 185484 DEBUG nova.compute.manager [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Preparing to wait for external event network-vif-plugged-b0ee253c-f6f7-42b0-906e-b575d0104fbb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.039 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquiring lock "be961482-e05a-4655-96ea-7d4810738a3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.040 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.040 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.040 185484 DEBUG nova.virt.libvirt.vif [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1779555975',display_name='tempest-AttachInterfacesUnderV243Test-server-1779555975',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1779555975',id=7,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyqdFH2eG14J/b7rtsScThh8sRg5o4HAKOX+Lemc285oMyiai6RHuVK35i3ylocgEr0tzDFqxutYlO+GlSDEWhiQGDeQuYFNXRaqskh88JGgZZcqOJFgm88hLGt14KInw==',key_name='tempest-keypair-1143519939',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4102701402ff4f059dd67182960e5b64',ramdisk_id='',reservation_id='r-yvft0q2g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-66798845',owner_user_name='tempest-AttachInterfacesUnderV243Test-66798845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:27:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='87274dd877fa476fa885a5665ba052ec',uuid=be961482-e05a-4655-96ea-7d4810738a3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", 
"ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.041 185484 DEBUG nova.network.os_vif_util [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Converting VIF {"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.041 185484 DEBUG nova.network.os_vif_util [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:60:a3,bridge_name='br-int',has_traffic_filtering=True,id=b0ee253c-f6f7-42b0-906e-b575d0104fbb,network=Network(400ee5e7-0154-41c9-b068-b9b4ba0c2fdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0ee253c-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.041 185484 DEBUG os_vif [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:60:a3,bridge_name='br-int',has_traffic_filtering=True,id=b0ee253c-f6f7-42b0-906e-b575d0104fbb,network=Network(400ee5e7-0154-41c9-b068-b9b4ba0c2fdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0ee253c-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.042 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.042 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.043 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.046 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.046 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0ee253c-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.047 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0ee253c-f6, col_values=(('external_ids', {'iface-id': 'b0ee253c-f6f7-42b0-906e-b575d0104fbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:60:a3', 'vm-uuid': 'be961482-e05a-4655-96ea-7d4810738a3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.049 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:01 compute-0 NetworkManager[56191]: <info>  [1769542081.0514] manager: (tapb0ee253c-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.053 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.057 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.058 185484 INFO os_vif [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:60:a3,bridge_name='br-int',has_traffic_filtering=True,id=b0ee253c-f6f7-42b0-906e-b575d0104fbb,network=Network(400ee5e7-0154-41c9-b068-b9b4ba0c2fdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0ee253c-f6')
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.132 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.133 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.133 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] No VIF found with MAC fa:16:3e:d8:60:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.134 185484 INFO nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Using config drive
Jan 27 19:28:01 compute-0 openstack_network_exporter[204477]: ERROR   19:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:28:01 compute-0 openstack_network_exporter[204477]: ERROR   19:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:28:01 compute-0 sshd-session[248804]: Invalid user sol from 45.148.10.240 port 33650
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.540 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.540 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.541 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.541 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:28:01 compute-0 sshd-session[248804]: Connection closed by invalid user sol 45.148.10.240 port 33650 [preauth]
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.631 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.713 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.714 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.807 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.813 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.873 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.874 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.934 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:01 compute-0 nova_compute[185480]: 2026-01-27 19:28:01.937 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Periodic task is updating the host stat, it is trying to get disk instance-00000007, but disk file was removed by concurrent operations such as resize.: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk.config'
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.145 185484 INFO nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Creating config drive at /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk.config
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.151 185484 DEBUG oslo_concurrency.processutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpna4y4gn_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.294 185484 DEBUG oslo_concurrency.processutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpna4y4gn_" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:02 compute-0 kernel: tapb0ee253c-f6: entered promiscuous mode
Jan 27 19:28:02 compute-0 NetworkManager[56191]: <info>  [1769542082.3644] manager: (tapb0ee253c-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 27 19:28:02 compute-0 ovn_controller[97647]: 2026-01-27T19:28:02Z|00071|binding|INFO|Claiming lport b0ee253c-f6f7-42b0-906e-b575d0104fbb for this chassis.
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.368 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:02 compute-0 ovn_controller[97647]: 2026-01-27T19:28:02Z|00072|binding|INFO|b0ee253c-f6f7-42b0-906e-b575d0104fbb: Claiming fa:16:3e:d8:60:a3 10.100.0.13
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.377 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:60:a3 10.100.0.13'], port_security=['fa:16:3e:d8:60:a3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'be961482-e05a-4655-96ea-7d4810738a3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4102701402ff4f059dd67182960e5b64', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eda9f509-731f-435f-adb5-63c842fd18c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa76a92b-3621-45ba-b3a2-0a9908c3ee96, chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=b0ee253c-f6f7-42b0-906e-b575d0104fbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.378 106898 INFO neutron.agent.ovn.metadata.agent [-] Port b0ee253c-f6f7-42b0-906e-b575d0104fbb in datapath 400ee5e7-0154-41c9-b068-b9b4ba0c2fdc bound to our chassis
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.380 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 400ee5e7-0154-41c9-b068-b9b4ba0c2fdc
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.387 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.388 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5267MB free_disk=72.37895202636719GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.388 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.388 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:02 compute-0 ovn_controller[97647]: 2026-01-27T19:28:02Z|00073|binding|INFO|Setting lport b0ee253c-f6f7-42b0-906e-b575d0104fbb ovn-installed in OVS
Jan 27 19:28:02 compute-0 ovn_controller[97647]: 2026-01-27T19:28:02Z|00074|binding|INFO|Setting lport b0ee253c-f6f7-42b0-906e-b575d0104fbb up in Southbound
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.396 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.397 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.398 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0a11fd-fc3b-42ec-9ee5-abcc2703ee5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.399 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap400ee5e7-01 in ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.402 238834 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap400ee5e7-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.402 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6441c3-4d05-47fc-92d3-3ea2f64edfd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.404 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[7416aae3-3d21-45bf-80d8-eb497527b0fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 systemd-udevd[248838]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.418 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[62721b5d-2848-47c1-8f50-696745cf5053]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 systemd-machined[156762]: New machine qemu-7-instance-00000007.
Jan 27 19:28:02 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 27 19:28:02 compute-0 NetworkManager[56191]: <info>  [1769542082.4341] device (tapb0ee253c-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 19:28:02 compute-0 NetworkManager[56191]: <info>  [1769542082.4348] device (tapb0ee253c-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.445 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[7b229255-f0b2-4201-adab-83b313e51df4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.481 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[1c84fa34-a1f8-4c41-bcf7-f045ff818a5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.489 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[dd203796-6c3b-4e16-889b-a8ad84e41a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 NetworkManager[56191]: <info>  [1769542082.4912] manager: (tap400ee5e7-00): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.496 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 87df2e63-6d90-4f9b-9c89-e3156bc11b8b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.496 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance be961482-e05a-4655-96ea-7d4810738a3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.497 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 22a0bada-8656-451c-af9e-743901138320 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.497 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.497 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.531 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[1d511db3-f2cc-4bc5-ba78-2f352f7cb65c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.535 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[25da9119-3d42-4643-ac36-05529a1002b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 NetworkManager[56191]: <info>  [1769542082.5616] device (tap400ee5e7-00): carrier: link connected
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.567 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[baa5f075-184d-40ca-aa73-0bc63c66325b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.587 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[f6cd7940-9b63-47d5-b0b5-e2f1b66151f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap400ee5e7-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:02:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519912, 'reachable_time': 25323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248870, 'error': None, 'target': 'ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.600 185484 DEBUG nova.network.neutron [req-2920ade3-e59d-41e2-bbf8-0d59698851a6 req-acaf16cd-a92d-4988-abbe-73f4b0a9a82d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updated VIF entry in instance network info cache for port b0ee253c-f6f7-42b0-906e-b575d0104fbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.600 185484 DEBUG nova.network.neutron [req-2920ade3-e59d-41e2-bbf8-0d59698851a6 req-acaf16cd-a92d-4988-abbe-73f4b0a9a82d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updating instance_info_cache with network_info: [{"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.604 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.605 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[523169f8-2468-44df-8451-7fb78882877f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:2c7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519912, 'tstamp': 519912}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248871, 'error': None, 'target': 'ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.621 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.621 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[757ce786-baf9-4bd3-85d5-3e218cbe45a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap400ee5e7-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:02:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519912, 'reachable_time': 25323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248872, 'error': None, 'target': 'ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.627 185484 DEBUG oslo_concurrency.lockutils [req-2920ade3-e59d-41e2-bbf8-0d59698851a6 req-acaf16cd-a92d-4988-abbe-73f4b0a9a82d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.651 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.651 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.664 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa96aaf-cccb-46b1-bd54-1ed6b95fca3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.731 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[cebddcbf-00b0-4bc3-9201-6cf91ce82615]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.734 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap400ee5e7-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.735 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.735 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap400ee5e7-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.738 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:02 compute-0 NetworkManager[56191]: <info>  [1769542082.7394] manager: (tap400ee5e7-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 27 19:28:02 compute-0 kernel: tap400ee5e7-00: entered promiscuous mode
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.743 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.746 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap400ee5e7-00, col_values=(('external_ids', {'iface-id': 'a1115b00-0e91-4461-b1a5-9507c6ba36f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.749 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:02 compute-0 ovn_controller[97647]: 2026-01-27T19:28:02Z|00075|binding|INFO|Releasing lport a1115b00-0e91-4461-b1a5-9507c6ba36f8 from this chassis (sb_readonly=0)
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.778 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.780 106898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/400ee5e7-0154-41c9-b068-b9b4ba0c2fdc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/400ee5e7-0154-41c9-b068-b9b4ba0c2fdc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.780 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[613454fd-25f0-4654-9815-2ea17baa4f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.781 106898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: global
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     log         /dev/log local0 debug
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     log-tag     haproxy-metadata-proxy-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     user        root
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     group       root
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     maxconn     1024
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     pidfile     /var/lib/neutron/external/pids/400ee5e7-0154-41c9-b068-b9b4ba0c2fdc.pid.haproxy
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     daemon
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: defaults
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     log global
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     mode http
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     option httplog
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     option dontlognull
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     option http-server-close
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     option forwardfor
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     retries                 3
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     timeout http-request    30s
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     timeout connect         30s
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     timeout client          32s
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     timeout server          32s
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     timeout http-keep-alive 30s
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: listen listener
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     bind 169.254.169.254:80
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:     http-request add-header X-OVN-Network-ID 400ee5e7-0154-41c9-b068-b9b4ba0c2fdc
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 19:28:02 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:02.782 106898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc', 'env', 'PROCESS_TAG=haproxy-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/400ee5e7-0154-41c9-b068-b9b4ba0c2fdc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.836 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542082.836285, be961482-e05a-4655-96ea-7d4810738a3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.837 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] VM Started (Lifecycle Event)
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.878 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.887 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542082.8363903, be961482-e05a-4655-96ea-7d4810738a3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.888 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] VM Paused (Lifecycle Event)
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.920 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.935 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:28:02 compute-0 nova_compute[185480]: 2026-01-27 19:28:02.963 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:28:03 compute-0 nova_compute[185480]: 2026-01-27 19:28:03.235 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:03 compute-0 podman[248911]: 2026-01-27 19:28:03.282825609 +0000 UTC m=+0.110562869 container create e8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 19:28:03 compute-0 podman[248911]: 2026-01-27 19:28:03.210579239 +0000 UTC m=+0.038316519 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 19:28:03 compute-0 systemd[1]: Started libpod-conmon-e8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8.scope.
Jan 27 19:28:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 19:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953c46d3744a30a58fa35cb21e01889abd6bc14e4175c98ca80a9c71a6102cdf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 19:28:03 compute-0 podman[248911]: 2026-01-27 19:28:03.412420395 +0000 UTC m=+0.240157715 container init e8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 19:28:03 compute-0 podman[248911]: 2026-01-27 19:28:03.425864244 +0000 UTC m=+0.253601504 container start e8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 19:28:03 compute-0 neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc[248925]: [NOTICE]   (248929) : New worker (248931) forked
Jan 27 19:28:03 compute-0 neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc[248925]: [NOTICE]   (248929) : Loading success.
Jan 27 19:28:03 compute-0 nova_compute[185480]: 2026-01-27 19:28:03.551 185484 DEBUG nova.network.neutron [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Successfully created port: 30c6cddc-b940-43d0-9889-aa0f1836eb27 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.420 185484 DEBUG nova.compute.manager [req-96326f30-1526-4b81-86b2-36d849f229b8 req-6b369b2f-811f-43fe-a8a3-40c491882665 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Received event network-vif-plugged-b0ee253c-f6f7-42b0-906e-b575d0104fbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.421 185484 DEBUG oslo_concurrency.lockutils [req-96326f30-1526-4b81-86b2-36d849f229b8 req-6b369b2f-811f-43fe-a8a3-40c491882665 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "be961482-e05a-4655-96ea-7d4810738a3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.421 185484 DEBUG oslo_concurrency.lockutils [req-96326f30-1526-4b81-86b2-36d849f229b8 req-6b369b2f-811f-43fe-a8a3-40c491882665 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.421 185484 DEBUG oslo_concurrency.lockutils [req-96326f30-1526-4b81-86b2-36d849f229b8 req-6b369b2f-811f-43fe-a8a3-40c491882665 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.421 185484 DEBUG nova.compute.manager [req-96326f30-1526-4b81-86b2-36d849f229b8 req-6b369b2f-811f-43fe-a8a3-40c491882665 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Processing event network-vif-plugged-b0ee253c-f6f7-42b0-906e-b575d0104fbb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.422 185484 DEBUG nova.compute.manager [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.427 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542084.4274476, be961482-e05a-4655-96ea-7d4810738a3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.428 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] VM Resumed (Lifecycle Event)
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.430 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.434 185484 INFO nova.virt.libvirt.driver [-] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Instance spawned successfully.
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.434 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.458 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.465 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.471 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.472 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.472 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.472 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.473 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.473 185484 DEBUG nova.virt.libvirt.driver [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.501 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.538 185484 INFO nova.compute.manager [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Took 10.33 seconds to spawn the instance on the hypervisor.
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.539 185484 DEBUG nova.compute.manager [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.610 185484 INFO nova.compute.manager [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Took 10.93 seconds to build instance.
Jan 27 19:28:04 compute-0 nova_compute[185480]: 2026-01-27 19:28:04.633 185484 DEBUG oslo_concurrency.lockutils [None req-39bf8363-7ba5-4a36-9a78-cd1657bda5db 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.046 185484 DEBUG nova.compute.manager [req-5e22d1eb-ce42-461a-92ad-00059319d575 req-4f5d94eb-a255-4392-a110-2b6270eafde8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Received event network-vif-plugged-f9c5e783-e45a-4c95-ac23-9dd75009c0e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.046 185484 DEBUG oslo_concurrency.lockutils [req-5e22d1eb-ce42-461a-92ad-00059319d575 req-4f5d94eb-a255-4392-a110-2b6270eafde8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.047 185484 DEBUG oslo_concurrency.lockutils [req-5e22d1eb-ce42-461a-92ad-00059319d575 req-4f5d94eb-a255-4392-a110-2b6270eafde8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.047 185484 DEBUG oslo_concurrency.lockutils [req-5e22d1eb-ce42-461a-92ad-00059319d575 req-4f5d94eb-a255-4392-a110-2b6270eafde8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.047 185484 DEBUG nova.compute.manager [req-5e22d1eb-ce42-461a-92ad-00059319d575 req-4f5d94eb-a255-4392-a110-2b6270eafde8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Processing event network-vif-plugged-f9c5e783-e45a-4c95-ac23-9dd75009c0e8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.048 185484 DEBUG nova.compute.manager [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Instance event wait completed in 10 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.053 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542085.0533054, 87df2e63-6d90-4f9b-9c89-e3156bc11b8b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.054 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] VM Resumed (Lifecycle Event)
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.058 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.066 185484 INFO nova.virt.libvirt.driver [-] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Instance spawned successfully.
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.066 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.077 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.091 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.097 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.097 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.098 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.099 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.100 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.100 185484 DEBUG nova.virt.libvirt.driver [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.122 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.163 185484 INFO nova.compute.manager [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Took 20.96 seconds to spawn the instance on the hypervisor.
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.164 185484 DEBUG nova.compute.manager [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.225 185484 INFO nova.compute.manager [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Took 21.61 seconds to build instance.
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.249 185484 DEBUG oslo_concurrency.lockutils [None req-08090eaa-7a07-4108-85d6-51d1f0101e37 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.653 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.655 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:28:05 compute-0 nova_compute[185480]: 2026-01-27 19:28:05.657 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:28:06 compute-0 nova_compute[185480]: 2026-01-27 19:28:06.050 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:06 compute-0 nova_compute[185480]: 2026-01-27 19:28:06.519 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:28:06 compute-0 nova_compute[185480]: 2026-01-27 19:28:06.894 185484 DEBUG nova.compute.manager [req-67ed6244-a2c0-4bfb-8edf-0f5ef2e081ba req-c1dca48f-0cce-4747-a96e-e99c351db38f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Received event network-vif-plugged-b0ee253c-f6f7-42b0-906e-b575d0104fbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:06 compute-0 nova_compute[185480]: 2026-01-27 19:28:06.895 185484 DEBUG oslo_concurrency.lockutils [req-67ed6244-a2c0-4bfb-8edf-0f5ef2e081ba req-c1dca48f-0cce-4747-a96e-e99c351db38f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "be961482-e05a-4655-96ea-7d4810738a3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:06 compute-0 nova_compute[185480]: 2026-01-27 19:28:06.896 185484 DEBUG oslo_concurrency.lockutils [req-67ed6244-a2c0-4bfb-8edf-0f5ef2e081ba req-c1dca48f-0cce-4747-a96e-e99c351db38f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:06 compute-0 nova_compute[185480]: 2026-01-27 19:28:06.896 185484 DEBUG oslo_concurrency.lockutils [req-67ed6244-a2c0-4bfb-8edf-0f5ef2e081ba req-c1dca48f-0cce-4747-a96e-e99c351db38f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:06 compute-0 nova_compute[185480]: 2026-01-27 19:28:06.897 185484 DEBUG nova.compute.manager [req-67ed6244-a2c0-4bfb-8edf-0f5ef2e081ba req-c1dca48f-0cce-4747-a96e-e99c351db38f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] No waiting events found dispatching network-vif-plugged-b0ee253c-f6f7-42b0-906e-b575d0104fbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:28:06 compute-0 nova_compute[185480]: 2026-01-27 19:28:06.897 185484 WARNING nova.compute.manager [req-67ed6244-a2c0-4bfb-8edf-0f5ef2e081ba req-c1dca48f-0cce-4747-a96e-e99c351db38f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Received unexpected event network-vif-plugged-b0ee253c-f6f7-42b0-906e-b575d0104fbb for instance with vm_state active and task_state None.
Jan 27 19:28:07 compute-0 nova_compute[185480]: 2026-01-27 19:28:07.408 185484 DEBUG nova.network.neutron [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Successfully updated port: 30c6cddc-b940-43d0-9889-aa0f1836eb27 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 19:28:07 compute-0 nova_compute[185480]: 2026-01-27 19:28:07.465 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Acquiring lock "refresh_cache-22a0bada-8656-451c-af9e-743901138320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:28:07 compute-0 nova_compute[185480]: 2026-01-27 19:28:07.466 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Acquired lock "refresh_cache-22a0bada-8656-451c-af9e-743901138320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:28:07 compute-0 nova_compute[185480]: 2026-01-27 19:28:07.466 185484 DEBUG nova.network.neutron [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:28:07 compute-0 nova_compute[185480]: 2026-01-27 19:28:07.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:28:07 compute-0 nova_compute[185480]: 2026-01-27 19:28:07.601 185484 DEBUG nova.compute.manager [req-2182a381-74a8-45d0-8c0a-792531270a0b req-2aa35c46-f929-4d16-81d2-3fddd11f3a07 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Received event network-vif-plugged-f9c5e783-e45a-4c95-ac23-9dd75009c0e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:07 compute-0 nova_compute[185480]: 2026-01-27 19:28:07.602 185484 DEBUG oslo_concurrency.lockutils [req-2182a381-74a8-45d0-8c0a-792531270a0b req-2aa35c46-f929-4d16-81d2-3fddd11f3a07 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:07 compute-0 nova_compute[185480]: 2026-01-27 19:28:07.603 185484 DEBUG oslo_concurrency.lockutils [req-2182a381-74a8-45d0-8c0a-792531270a0b req-2aa35c46-f929-4d16-81d2-3fddd11f3a07 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:07 compute-0 nova_compute[185480]: 2026-01-27 19:28:07.604 185484 DEBUG oslo_concurrency.lockutils [req-2182a381-74a8-45d0-8c0a-792531270a0b req-2aa35c46-f929-4d16-81d2-3fddd11f3a07 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:07 compute-0 nova_compute[185480]: 2026-01-27 19:28:07.605 185484 DEBUG nova.compute.manager [req-2182a381-74a8-45d0-8c0a-792531270a0b req-2aa35c46-f929-4d16-81d2-3fddd11f3a07 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] No waiting events found dispatching network-vif-plugged-f9c5e783-e45a-4c95-ac23-9dd75009c0e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:28:07 compute-0 nova_compute[185480]: 2026-01-27 19:28:07.606 185484 WARNING nova.compute.manager [req-2182a381-74a8-45d0-8c0a-792531270a0b req-2aa35c46-f929-4d16-81d2-3fddd11f3a07 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Received unexpected event network-vif-plugged-f9c5e783-e45a-4c95-ac23-9dd75009c0e8 for instance with vm_state active and task_state None.
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.196 185484 DEBUG oslo_concurrency.lockutils [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Acquiring lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.197 185484 DEBUG oslo_concurrency.lockutils [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.198 185484 DEBUG oslo_concurrency.lockutils [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Acquiring lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.198 185484 DEBUG oslo_concurrency.lockutils [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.199 185484 DEBUG oslo_concurrency.lockutils [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.200 185484 INFO nova.compute.manager [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Terminating instance
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.201 185484 DEBUG nova.compute.manager [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.202 185484 DEBUG nova.network.neutron [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:28:08 compute-0 kernel: tapf9c5e783-e4 (unregistering): left promiscuous mode
Jan 27 19:28:08 compute-0 NetworkManager[56191]: <info>  [1769542088.2499] device (tapf9c5e783-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.255 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:08 compute-0 ovn_controller[97647]: 2026-01-27T19:28:08Z|00076|binding|INFO|Releasing lport f9c5e783-e45a-4c95-ac23-9dd75009c0e8 from this chassis (sb_readonly=0)
Jan 27 19:28:08 compute-0 ovn_controller[97647]: 2026-01-27T19:28:08Z|00077|binding|INFO|Setting lport f9c5e783-e45a-4c95-ac23-9dd75009c0e8 down in Southbound
Jan 27 19:28:08 compute-0 ovn_controller[97647]: 2026-01-27T19:28:08Z|00078|binding|INFO|Removing iface tapf9c5e783-e4 ovn-installed in OVS
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.271 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:3c:75 10.100.0.6'], port_security=['fa:16:3e:1f:3c:75 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '87df2e63-6d90-4f9b-9c89-e3156bc11b8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c11641b-5524-434f-b21f-ef4387b27e56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3600960457704cf7bab0031c88d94f9d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7f0ec1ae-1b56-407d-b130-c9856f04cabc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b019432-4716-4db3-a0f5-056a1de5deec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=f9c5e783-e45a-4c95-ac23-9dd75009c0e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.275 106898 INFO neutron.agent.ovn.metadata.agent [-] Port f9c5e783-e45a-4c95-ac23-9dd75009c0e8 in datapath 0c11641b-5524-434f-b21f-ef4387b27e56 unbound from our chassis
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.274 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.280 106898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0c11641b-5524-434f-b21f-ef4387b27e56, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.282 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[5d2594f7-f20b-4ee5-a34e-768070509666]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.283 106898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56 namespace which is not needed anymore
Jan 27 19:28:08 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 27 19:28:08 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 4.731s CPU time.
Jan 27 19:28:08 compute-0 systemd-machined[156762]: Machine qemu-6-instance-00000006 terminated.
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.429 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.436 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.479 185484 INFO nova.virt.libvirt.driver [-] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Instance destroyed successfully.
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.480 185484 DEBUG nova.objects.instance [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lazy-loading 'resources' on Instance uuid 87df2e63-6d90-4f9b-9c89-e3156bc11b8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:28:08 compute-0 neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56[248651]: [NOTICE]   (248660) : haproxy version is 2.8.14-c23fe91
Jan 27 19:28:08 compute-0 neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56[248651]: [NOTICE]   (248660) : path to executable is /usr/sbin/haproxy
Jan 27 19:28:08 compute-0 neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56[248651]: [WARNING]  (248660) : Exiting Master process...
Jan 27 19:28:08 compute-0 neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56[248651]: [WARNING]  (248660) : Exiting Master process...
Jan 27 19:28:08 compute-0 neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56[248651]: [ALERT]    (248660) : Current worker (248663) exited with code 143 (Terminated)
Jan 27 19:28:08 compute-0 neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56[248651]: [WARNING]  (248660) : All workers exited. Exiting... (0)
Jan 27 19:28:08 compute-0 systemd[1]: libpod-d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9.scope: Deactivated successfully.
Jan 27 19:28:08 compute-0 conmon[248651]: conmon d15476bf79dbaf6eaca9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9.scope/container/memory.events
Jan 27 19:28:08 compute-0 podman[248965]: 2026-01-27 19:28:08.501766161 +0000 UTC m=+0.085869015 container died d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.553 185484 DEBUG nova.virt.libvirt.vif [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T19:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-358082491',display_name='tempest-ServerAddressesTestJSON-server-358082491',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-358082491',id=6,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T19:28:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3600960457704cf7bab0031c88d94f9d',ramdisk_id='',reservation_id='r-fj1kutnx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-757206843',owner_user_name='tempest-ServerAddressesTestJSON-757206843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T19:28:05Z,user_data=None,user_id='dc82377b5f6f4792aeae1f181b8cf53b',uuid=87df2e63-6d90-4f9b-9c89-e3156bc11b8b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "address": "fa:16:3e:1f:3c:75", "network": {"id": "0c11641b-5524-434f-b21f-ef4387b27e56", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1707744569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3600960457704cf7bab0031c88d94f9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9c5e783-e4", "ovs_interfaceid": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 19:28:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9-userdata-shm.mount: Deactivated successfully.
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.554 185484 DEBUG nova.network.os_vif_util [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Converting VIF {"id": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "address": "fa:16:3e:1f:3c:75", "network": {"id": "0c11641b-5524-434f-b21f-ef4387b27e56", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1707744569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3600960457704cf7bab0031c88d94f9d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9c5e783-e4", "ovs_interfaceid": "f9c5e783-e45a-4c95-ac23-9dd75009c0e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.556 185484 DEBUG nova.network.os_vif_util [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3c:75,bridge_name='br-int',has_traffic_filtering=True,id=f9c5e783-e45a-4c95-ac23-9dd75009c0e8,network=Network(0c11641b-5524-434f-b21f-ef4387b27e56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9c5e783-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.556 185484 DEBUG os_vif [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3c:75,bridge_name='br-int',has_traffic_filtering=True,id=f9c5e783-e45a-4c95-ac23-9dd75009c0e8,network=Network(0c11641b-5524-434f-b21f-ef4387b27e56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9c5e783-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.558 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-38f1dea9aba8f1c0a8ce01a277487ad117a1f1fba3f1a71e0ffeb838223027f3-merged.mount: Deactivated successfully.
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.559 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf9c5e783-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.564 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.567 185484 INFO os_vif [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:3c:75,bridge_name='br-int',has_traffic_filtering=True,id=f9c5e783-e45a-4c95-ac23-9dd75009c0e8,network=Network(0c11641b-5524-434f-b21f-ef4387b27e56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9c5e783-e4')
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.568 185484 INFO nova.virt.libvirt.driver [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Deleting instance files /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b_del
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.569 185484 INFO nova.virt.libvirt.driver [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Deletion of /var/lib/nova/instances/87df2e63-6d90-4f9b-9c89-e3156bc11b8b_del complete
Jan 27 19:28:08 compute-0 podman[248965]: 2026-01-27 19:28:08.585885043 +0000 UTC m=+0.169987897 container cleanup d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 19:28:08 compute-0 systemd[1]: libpod-conmon-d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9.scope: Deactivated successfully.
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.645 185484 INFO nova.compute.manager [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Took 0.44 seconds to destroy the instance on the hypervisor.
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.646 185484 DEBUG oslo.service.loopingcall [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.647 185484 DEBUG nova.compute.manager [-] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.647 185484 DEBUG nova.network.neutron [-] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 19:28:08 compute-0 podman[249010]: 2026-01-27 19:28:08.71560039 +0000 UTC m=+0.097331845 container remove d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.731 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[492b27bd-0b7b-44a4-853a-434462168814]: (4, ('Tue Jan 27 07:28:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56 (d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9)\nd15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9\nTue Jan 27 07:28:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56 (d15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9)\nd15476bf79dbaf6eaca9ae885c66a0da7e73922033eff27a1a78bf32f1b1b3b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.735 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9dac30-2aaf-45f1-a70f-a4cea5ae171a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.749 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c11641b-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.752 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:08 compute-0 kernel: tap0c11641b-50: left promiscuous mode
Jan 27 19:28:08 compute-0 nova_compute[185480]: 2026-01-27 19:28:08.784 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.788 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[83c36fb9-022a-451e-823e-45dd6bf87f98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.803 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[a40248b6-ef14-42ee-a1f0-d56daadf3a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.804 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[40f99f7d-94d2-417c-bf90-b8f4969b7cc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.825 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[aab763d3-f2ee-4d67-9be3-15d00b9983c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518993, 'reachable_time': 27914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249023, 'error': None, 'target': 'ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.828 107353 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0c11641b-5524-434f-b21f-ef4387b27e56 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 19:28:08 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:08.828 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a1252e-fd84-487d-8867-7428496f5500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d0c11641b\x2d5524\x2d434f\x2db21f\x2def4387b27e56.mount: Deactivated successfully.
Jan 27 19:28:09 compute-0 nova_compute[185480]: 2026-01-27 19:28:09.271 185484 DEBUG nova.compute.manager [req-b0167e75-faad-478d-843e-8a6b044d5f60 req-d01c9363-fde4-4945-8d62-acfe5c728b3e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Received event network-changed-30c6cddc-b940-43d0-9889-aa0f1836eb27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:09 compute-0 nova_compute[185480]: 2026-01-27 19:28:09.271 185484 DEBUG nova.compute.manager [req-b0167e75-faad-478d-843e-8a6b044d5f60 req-d01c9363-fde4-4945-8d62-acfe5c728b3e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Refreshing instance network info cache due to event network-changed-30c6cddc-b940-43d0-9889-aa0f1836eb27. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:28:09 compute-0 nova_compute[185480]: 2026-01-27 19:28:09.272 185484 DEBUG oslo_concurrency.lockutils [req-b0167e75-faad-478d-843e-8a6b044d5f60 req-d01c9363-fde4-4945-8d62-acfe5c728b3e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-22a0bada-8656-451c-af9e-743901138320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.291 185484 DEBUG nova.compute.manager [req-6ef74f62-d253-4827-ab47-a9aff4c898c7 req-2123bb6d-b22c-4b05-9563-4f048eeb366c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Received event network-changed-b0ee253c-f6f7-42b0-906e-b575d0104fbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.291 185484 DEBUG nova.compute.manager [req-6ef74f62-d253-4827-ab47-a9aff4c898c7 req-2123bb6d-b22c-4b05-9563-4f048eeb366c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Refreshing instance network info cache due to event network-changed-b0ee253c-f6f7-42b0-906e-b575d0104fbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.292 185484 DEBUG oslo_concurrency.lockutils [req-6ef74f62-d253-4827-ab47-a9aff4c898c7 req-2123bb6d-b22c-4b05-9563-4f048eeb366c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.292 185484 DEBUG oslo_concurrency.lockutils [req-6ef74f62-d253-4827-ab47-a9aff4c898c7 req-2123bb6d-b22c-4b05-9563-4f048eeb366c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.293 185484 DEBUG nova.network.neutron [req-6ef74f62-d253-4827-ab47-a9aff4c898c7 req-2123bb6d-b22c-4b05-9563-4f048eeb366c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Refreshing network info cache for port b0ee253c-f6f7-42b0-906e-b575d0104fbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:28:10 compute-0 podman[249025]: 2026-01-27 19:28:10.327557082 +0000 UTC m=+0.094047585 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Jan 27 19:28:10 compute-0 podman[249024]: 2026-01-27 19:28:10.346268231 +0000 UTC m=+0.111809441 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.525 185484 DEBUG nova.network.neutron [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Updating instance_info_cache with network_info: [{"id": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "address": "fa:16:3e:0c:0a:d1", "network": {"id": "5d551bfa-9f0b-4c1d-8b77-db9b22bd243c", "bridge": "br-int", "label": "tempest-ServersTestJSON-875466886-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73cc7e581727408e9ec8407b9dbf203f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c6cddc-b9", "ovs_interfaceid": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.575 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Releasing lock "refresh_cache-22a0bada-8656-451c-af9e-743901138320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.585 185484 DEBUG nova.compute.manager [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Instance network_info: |[{"id": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "address": "fa:16:3e:0c:0a:d1", "network": {"id": "5d551bfa-9f0b-4c1d-8b77-db9b22bd243c", "bridge": "br-int", "label": "tempest-ServersTestJSON-875466886-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73cc7e581727408e9ec8407b9dbf203f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c6cddc-b9", "ovs_interfaceid": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.585 185484 DEBUG nova.network.neutron [-] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.587 185484 DEBUG oslo_concurrency.lockutils [req-b0167e75-faad-478d-843e-8a6b044d5f60 req-d01c9363-fde4-4945-8d62-acfe5c728b3e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-22a0bada-8656-451c-af9e-743901138320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.587 185484 DEBUG nova.network.neutron [req-b0167e75-faad-478d-843e-8a6b044d5f60 req-d01c9363-fde4-4945-8d62-acfe5c728b3e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Refreshing network info cache for port 30c6cddc-b940-43d0-9889-aa0f1836eb27 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.594 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Start _get_guest_xml network_info=[{"id": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "address": "fa:16:3e:0c:0a:d1", "network": {"id": "5d551bfa-9f0b-4c1d-8b77-db9b22bd243c", "bridge": "br-int", "label": "tempest-ServersTestJSON-875466886-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73cc7e581727408e9ec8407b9dbf203f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c6cddc-b9", "ovs_interfaceid": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T19:26:43Z,direct_url=<?>,disk_format='qcow2',id=729797c6-2677-44bd-a4a8-949d1f57b0a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T19:26:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': '729797c6-2677-44bd-a4a8-949d1f57b0a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.605 185484 WARNING nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.612 185484 DEBUG nova.virt.libvirt.host [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.614 185484 DEBUG nova.virt.libvirt.host [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.620 185484 DEBUG nova.virt.libvirt.host [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.621 185484 DEBUG nova.virt.libvirt.host [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.623 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.624 185484 DEBUG nova.virt.hardware [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T19:26:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='49f81b8c-e0df-4a53-87c6-69576be59651',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T19:26:43Z,direct_url=<?>,disk_format='qcow2',id=729797c6-2677-44bd-a4a8-949d1f57b0a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T19:26:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.625 185484 DEBUG nova.virt.hardware [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.626 185484 DEBUG nova.virt.hardware [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.627 185484 DEBUG nova.virt.hardware [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.629 185484 DEBUG nova.virt.hardware [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.630 185484 DEBUG nova.virt.hardware [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.631 185484 DEBUG nova.virt.hardware [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.633 185484 DEBUG nova.virt.hardware [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.634 185484 DEBUG nova.virt.hardware [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.635 185484 DEBUG nova.virt.hardware [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.636 185484 DEBUG nova.virt.hardware [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
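The topology selection logged above reduces to a single candidate: with flavor and image limits and preferences all 0:0:0 and one vCPU, the only factorization inside the 65536 caps is sockets=1, cores=1, threads=1. A minimal standalone sketch of that enumeration follows, assuming a simplified model rather than nova.virt.hardware itself (the Topology class and possible_topologies helper are hypothetical):

# Sketch only: enumerate (sockets, cores, threads) triples whose product equals the
# vCPU count, bounded by the 65536 limits logged above. For vcpus=1 this yields 1:1:1.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Topology:
    sockets: int
    cores: int
    threads: int

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    limit = lambda cap: range(1, min(vcpus, cap) + 1)
    return [Topology(s, c, t)
            for s, c, t in product(limit(max_sockets), limit(max_cores), limit(max_threads))
            if s * c * t == vcpus]

print(possible_topologies(1))  # [Topology(sockets=1, cores=1, threads=1)]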
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.642 185484 DEBUG nova.virt.libvirt.vif [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:27:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-215967915',display_name='tempest-ServersTestJSON-server-215967915',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-215967915',id=8,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM+QmMLcjSlAuSHlxXPE5hwyIk6QDdNHwRQEPXpyKx1YKoNRY0mNo/wl7iHULAMPj6jIL/1KYvwvCBBPc4VqXQWK9yCFEblr3fx+0cAIeShxFAwnGVT9IeM3bwAWEjWltQ==',key_name='tempest-keypair-517750285',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73cc7e581727408e9ec8407b9dbf203f',ramdisk_id='',reservation_id='r-8optdkg0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-854293747',owner_user_name='tempest-ServersTestJSON-854293747-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:28:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='edcee212fadb4bbf9267ca664aca5ec7',uuid=22a0bada-8656-451c-af9e-743901138320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "address": "fa:16:3e:0c:0a:d1", "network": {"id": "5d551bfa-9f0b-4c1d-8b77-db9b22bd243c", "bridge": "br-int", "label": "tempest-ServersTestJSON-875466886-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73cc7e581727408e9ec8407b9dbf203f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c6cddc-b9", "ovs_interfaceid": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.644 185484 DEBUG nova.network.os_vif_util [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Converting VIF {"id": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "address": "fa:16:3e:0c:0a:d1", "network": {"id": "5d551bfa-9f0b-4c1d-8b77-db9b22bd243c", "bridge": "br-int", "label": "tempest-ServersTestJSON-875466886-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73cc7e581727408e9ec8407b9dbf203f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c6cddc-b9", "ovs_interfaceid": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.645 185484 DEBUG nova.network.os_vif_util [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:0a:d1,bridge_name='br-int',has_traffic_filtering=True,id=30c6cddc-b940-43d0-9889-aa0f1836eb27,network=Network(5d551bfa-9f0b-4c1d-8b77-db9b22bd243c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c6cddc-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.647 185484 DEBUG nova.objects.instance [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lazy-loading 'pci_devices' on Instance uuid 22a0bada-8656-451c-af9e-743901138320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.652 185484 INFO nova.compute.manager [-] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Took 2.00 seconds to deallocate network for instance.
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.749 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] End _get_guest_xml xml=<domain type="kvm">
Jan 27 19:28:10 compute-0 nova_compute[185480]:   <uuid>22a0bada-8656-451c-af9e-743901138320</uuid>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   <name>instance-00000008</name>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   <memory>131072</memory>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   <vcpu>1</vcpu>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   <metadata>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <nova:name>tempest-ServersTestJSON-server-215967915</nova:name>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <nova:creationTime>2026-01-27 19:28:10</nova:creationTime>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <nova:flavor name="m1.nano">
Jan 27 19:28:10 compute-0 nova_compute[185480]:         <nova:memory>128</nova:memory>
Jan 27 19:28:10 compute-0 nova_compute[185480]:         <nova:disk>1</nova:disk>
Jan 27 19:28:10 compute-0 nova_compute[185480]:         <nova:swap>0</nova:swap>
Jan 27 19:28:10 compute-0 nova_compute[185480]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 19:28:10 compute-0 nova_compute[185480]:         <nova:vcpus>1</nova:vcpus>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       </nova:flavor>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <nova:owner>
Jan 27 19:28:10 compute-0 nova_compute[185480]:         <nova:user uuid="edcee212fadb4bbf9267ca664aca5ec7">tempest-ServersTestJSON-854293747-project-member</nova:user>
Jan 27 19:28:10 compute-0 nova_compute[185480]:         <nova:project uuid="73cc7e581727408e9ec8407b9dbf203f">tempest-ServersTestJSON-854293747</nova:project>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       </nova:owner>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <nova:root type="image" uuid="729797c6-2677-44bd-a4a8-949d1f57b0a2"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <nova:ports>
Jan 27 19:28:10 compute-0 nova_compute[185480]:         <nova:port uuid="30c6cddc-b940-43d0-9889-aa0f1836eb27">
Jan 27 19:28:10 compute-0 nova_compute[185480]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:         </nova:port>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       </nova:ports>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     </nova:instance>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   </metadata>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   <sysinfo type="smbios">
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <system>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <entry name="manufacturer">RDO</entry>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <entry name="product">OpenStack Compute</entry>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <entry name="serial">22a0bada-8656-451c-af9e-743901138320</entry>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <entry name="uuid">22a0bada-8656-451c-af9e-743901138320</entry>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <entry name="family">Virtual Machine</entry>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     </system>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   </sysinfo>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   <os>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <boot dev="hd"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <smbios mode="sysinfo"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   </os>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   <features>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <acpi/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <apic/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <vmcoreinfo/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   </features>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   <clock offset="utc">
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <timer name="hpet" present="no"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   </clock>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   <cpu mode="host-model" match="exact">
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   </cpu>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   <devices>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <target dev="vda" bus="virtio"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <disk type="file" device="cdrom">
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk.config"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <target dev="sda" bus="sata"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <interface type="ethernet">
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <mac address="fa:16:3e:0c:0a:d1"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <mtu size="1442"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <target dev="tap30c6cddc-b9"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     </interface>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <serial type="pty">
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <log file="/var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/console.log" append="off"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     </serial>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <video>
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     </video>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <input type="tablet" bus="usb"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <rng model="virtio">
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <backend model="random">/dev/urandom</backend>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     </rng>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <controller type="usb" index="0"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     <memballoon model="virtio">
Jan 27 19:28:10 compute-0 nova_compute[185480]:       <stats period="10"/>
Jan 27 19:28:10 compute-0 nova_compute[185480]:     </memballoon>
Jan 27 19:28:10 compute-0 nova_compute[185480]:   </devices>
Jan 27 19:28:10 compute-0 nova_compute[185480]: </domain>
Jan 27 19:28:10 compute-0 nova_compute[185480]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
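The XML dump that ends here is the full guest definition handed to libvirt for instance-00000008. Once the domain has been defined, the same document can be read back with the libvirt Python bindings; a sketch, assuming libvirt-python is installed and the caller can reach the local qemu:///system socket:

# Sketch: read the live domain definition back from libvirt.
import libvirt

conn = libvirt.open("qemu:///system")           # same hypervisor the compute service drives
dom = conn.lookupByName("instance-00000008")    # <name> element from the XML above
print(dom.XMLDesc())                            # includes the device addresses libvirt assigned
conn.close()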
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.760 185484 DEBUG nova.compute.manager [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Preparing to wait for external event network-vif-plugged-30c6cddc-b940-43d0-9889-aa0f1836eb27 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.760 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Acquiring lock "22a0bada-8656-451c-af9e-743901138320-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.761 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "22a0bada-8656-451c-af9e-743901138320-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.761 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "22a0bada-8656-451c-af9e-743901138320-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
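The Acquiring/acquired/released triplet above is oslo.concurrency's named-lock pattern guarding the per-instance event map while the compute manager registers the expected network-vif-plugged event. A hedged sketch of the same pattern (the function body is a placeholder, not nova's code):

# Sketch: oslo.concurrency named lock, as used for the "<uuid>-events" lock above.
from oslo_concurrency import lockutils

@lockutils.synchronized("22a0bada-8656-451c-af9e-743901138320-events")
def _create_or_get_event():
    # Placeholder body; nova tracks an event object per expected external event here.
    pass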
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.763 185484 DEBUG nova.virt.libvirt.vif [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:27:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-215967915',display_name='tempest-ServersTestJSON-server-215967915',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-215967915',id=8,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM+QmMLcjSlAuSHlxXPE5hwyIk6QDdNHwRQEPXpyKx1YKoNRY0mNo/wl7iHULAMPj6jIL/1KYvwvCBBPc4VqXQWK9yCFEblr3fx+0cAIeShxFAwnGVT9IeM3bwAWEjWltQ==',key_name='tempest-keypair-517750285',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73cc7e581727408e9ec8407b9dbf203f',ramdisk_id='',reservation_id='r-8optdkg0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-854293747',owner_user_name='tempest-ServersTestJSON-854293747-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:28:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='edcee212fadb4bbf9267ca664aca5ec7',uuid=22a0bada-8656-451c-af9e-743901138320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "address": "fa:16:3e:0c:0a:d1", "network": {"id": "5d551bfa-9f0b-4c1d-8b77-db9b22bd243c", "bridge": "br-int", "label": "tempest-ServersTestJSON-875466886-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73cc7e581727408e9ec8407b9dbf203f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c6cddc-b9", "ovs_interfaceid": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "qbh_params": 
null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.763 185484 DEBUG nova.network.os_vif_util [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Converting VIF {"id": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "address": "fa:16:3e:0c:0a:d1", "network": {"id": "5d551bfa-9f0b-4c1d-8b77-db9b22bd243c", "bridge": "br-int", "label": "tempest-ServersTestJSON-875466886-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73cc7e581727408e9ec8407b9dbf203f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c6cddc-b9", "ovs_interfaceid": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.764 185484 DEBUG nova.network.os_vif_util [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:0a:d1,bridge_name='br-int',has_traffic_filtering=True,id=30c6cddc-b940-43d0-9889-aa0f1836eb27,network=Network(5d551bfa-9f0b-4c1d-8b77-db9b22bd243c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c6cddc-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.765 185484 DEBUG os_vif [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:0a:d1,bridge_name='br-int',has_traffic_filtering=True,id=30c6cddc-b940-43d0-9889-aa0f1836eb27,network=Network(5d551bfa-9f0b-4c1d-8b77-db9b22bd243c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c6cddc-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.766 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.767 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.767 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.777 185484 DEBUG oslo_concurrency.lockutils [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.778 185484 DEBUG oslo_concurrency.lockutils [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.783 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.783 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30c6cddc-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.784 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30c6cddc-b9, col_values=(('external_ids', {'iface-id': '30c6cddc-b940-43d0-9889-aa0f1836eb27', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:0a:d1', 'vm-uuid': '22a0bada-8656-451c-af9e-743901138320'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:10 compute-0 NetworkManager[56191]: <info>  [1769542090.7874] manager: (tap30c6cddc-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.786 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.790 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.801 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.803 185484 INFO os_vif [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:0a:d1,bridge_name='br-int',has_traffic_filtering=True,id=30c6cddc-b940-43d0-9889-aa0f1836eb27,network=Network(5d551bfa-9f0b-4c1d-8b77-db9b22bd243c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c6cddc-b9')
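The successful plug corresponds to the two ovsdbapp commands logged a few lines earlier: AddPortCommand attaching tap30c6cddc-b9 to br-int, then DbSetCommand writing the Interface external_ids. A rough Python illustration of the same state change driven through the ovs-vsctl CLI, rather than the direct ovsdb connection os-vif actually uses:

# Illustration only: equivalent ovs-vsctl invocation for the logged plug.
import subprocess

bridge, port = "br-int", "tap30c6cddc-b9"
external_ids = {
    "iface-id": "30c6cddc-b940-43d0-9889-aa0f1836eb27",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:0c:0a:d1",
    "vm-uuid": "22a0bada-8656-451c-af9e-743901138320",
}
cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
       "--", "set", "Interface", port]
cmd += [f"external_ids:{key}={value}" for key, value in external_ids.items()]
subprocess.run(cmd, check=True)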
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.904 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.905 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.905 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] No VIF found with MAC fa:16:3e:0c:0a:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.906 185484 INFO nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Using config drive
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.923 185484 DEBUG nova.compute.provider_tree [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.951 185484 DEBUG nova.scheduler.client.report [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:28:10 compute-0 nova_compute[185480]: 2026-01-27 19:28:10.995 185484 DEBUG oslo_concurrency.lockutils [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:11 compute-0 nova_compute[185480]: 2026-01-27 19:28:11.031 185484 INFO nova.scheduler.client.report [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Deleted allocations for instance 87df2e63-6d90-4f9b-9c89-e3156bc11b8b
Jan 27 19:28:11 compute-0 nova_compute[185480]: 2026-01-27 19:28:11.123 185484 DEBUG oslo_concurrency.lockutils [None req-4a2bee84-1d51-4f91-a023-e647b20ef8f6 dc82377b5f6f4792aeae1f181b8cf53b 3600960457704cf7bab0031c88d94f9d - - default default] Lock "87df2e63-6d90-4f9b-9c89-e3156bc11b8b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:11 compute-0 nova_compute[185480]: 2026-01-27 19:28:11.428 185484 DEBUG nova.compute.manager [req-cac96e71-d194-4deb-89d8-82c4ebfc4d64 req-f7814bab-c908-4407-951e-6d4c2a9d3fed bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Received event network-vif-deleted-f9c5e783-e45a-4c95-ac23-9dd75009c0e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:11 compute-0 nova_compute[185480]: 2026-01-27 19:28:11.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:28:11 compute-0 nova_compute[185480]: 2026-01-27 19:28:11.628 185484 INFO nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Creating config drive at /var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk.config
Jan 27 19:28:11 compute-0 nova_compute[185480]: 2026-01-27 19:28:11.636 185484 DEBUG oslo_concurrency.processutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1lebzvs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:11 compute-0 nova_compute[185480]: 2026-01-27 19:28:11.770 185484 DEBUG oslo_concurrency.processutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1lebzvs" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
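Config-drive generation is the single mkisofs run logged above over a temporary staging tree. A sketch of the same invocation through oslo.concurrency's process helper; /tmp/staging stands in for the temporary directory (tmpz1lebzvs in the log) that nova populates with the metadata files beforehand:

# Sketch: rebuild the config-drive ISO with the exact flags from the log.
from oslo_concurrency import processutils

iso_path = "/var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320/disk.config"
processutils.execute(
    "/usr/bin/mkisofs", "-o", iso_path,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2",
    "/tmp/staging",
)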
Jan 27 19:28:11 compute-0 kernel: tap30c6cddc-b9: entered promiscuous mode
Jan 27 19:28:11 compute-0 NetworkManager[56191]: <info>  [1769542091.8741] manager: (tap30c6cddc-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 27 19:28:11 compute-0 ovn_controller[97647]: 2026-01-27T19:28:11Z|00079|binding|INFO|Claiming lport 30c6cddc-b940-43d0-9889-aa0f1836eb27 for this chassis.
Jan 27 19:28:11 compute-0 ovn_controller[97647]: 2026-01-27T19:28:11Z|00080|binding|INFO|30c6cddc-b940-43d0-9889-aa0f1836eb27: Claiming fa:16:3e:0c:0a:d1 10.100.0.14
Jan 27 19:28:11 compute-0 nova_compute[185480]: 2026-01-27 19:28:11.880 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:11 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:11.889 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:0a:d1 10.100.0.14'], port_security=['fa:16:3e:0c:0a:d1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '22a0bada-8656-451c-af9e-743901138320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73cc7e581727408e9ec8407b9dbf203f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf6281c1-6104-4edf-92bd-c5d51eb4346e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=617c4bbb-d204-433c-848d-6e2c41736c50, chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=30c6cddc-b940-43d0-9889-aa0f1836eb27) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:28:11 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:11.893 106898 INFO neutron.agent.ovn.metadata.agent [-] Port 30c6cddc-b940-43d0-9889-aa0f1836eb27 in datapath 5d551bfa-9f0b-4c1d-8b77-db9b22bd243c bound to our chassis
Jan 27 19:28:11 compute-0 ovn_controller[97647]: 2026-01-27T19:28:11Z|00081|binding|INFO|Setting lport 30c6cddc-b940-43d0-9889-aa0f1836eb27 ovn-installed in OVS
Jan 27 19:28:11 compute-0 ovn_controller[97647]: 2026-01-27T19:28:11Z|00082|binding|INFO|Setting lport 30c6cddc-b940-43d0-9889-aa0f1836eb27 up in Southbound
Jan 27 19:28:11 compute-0 nova_compute[185480]: 2026-01-27 19:28:11.898 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:11 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:11.898 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d551bfa-9f0b-4c1d-8b77-db9b22bd243c
Jan 27 19:28:11 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:11.918 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e3d374-a17d-4584-9117-0c9d6e02f643]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:11 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:11.920 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d551bfa-91 in ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 19:28:11 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:11.925 238834 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d551bfa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 19:28:11 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:11.925 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[2aeb711e-9c1f-456c-a95c-11e1708a7576]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:11 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:11.928 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2ccae2-2f59-4774-9a26-79cdbb50fe2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:11 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:11.947 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[36df6a48-5c1c-465b-9153-cd260bd9cf93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:11 compute-0 systemd-machined[156762]: New machine qemu-8-instance-00000008.
Jan 27 19:28:11 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Jan 27 19:28:11 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:11.987 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc72039-6465-4168-809e-d95615d53701]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:11 compute-0 systemd-udevd[249088]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:28:12 compute-0 NetworkManager[56191]: <info>  [1769542092.0146] device (tap30c6cddc-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 19:28:12 compute-0 NetworkManager[56191]: <info>  [1769542092.0202] device (tap30c6cddc-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.049 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[e539ef6b-57d8-4643-a832-a123fc4d81b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:12 compute-0 NetworkManager[56191]: <info>  [1769542092.0606] manager: (tap5d551bfa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.057 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[3c293232-2608-4ab2-af2d-6e1665d56c1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.109 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a580fd-40fd-454d-9c5c-19d1a86d26d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.114 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[76cf9006-d36b-4d98-a5fd-9b7ad11d2c92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:12 compute-0 NetworkManager[56191]: <info>  [1769542092.1493] device (tap5d551bfa-90): carrier: link connected
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.163 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[49891240-5b6c-42b8-99f8-121582a17266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.185 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1ae5b2-82b3-4ad9-880f-fd3060ca2fed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d551bfa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:97:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520871, 'reachable_time': 42042, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249120, 'error': None, 'target': 'ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.199 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[ec41a2e1-ad34-491e-a4ab-9c0e706cc40e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:9721'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520871, 'tstamp': 520871}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249124, 'error': None, 'target': 'ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.215 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[240147c0-e186-4c31-b9d7-c7ae983ffe97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d551bfa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:97:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520871, 'reachable_time': 42042, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249126, 'error': None, 'target': 'ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
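
The two privsep replies above are pyroute2 netlink dumps (an RTM_NEWADDR for the link-local fe80::f816:3eff:fee6:9721 and an RTM_NEWLINK for the veth leg tap5d551bfa-91) taken inside the ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c namespace. A minimal sketch of pulling the same state directly with pyroute2, assuming the namespace and interface from the log still exist:

    # Sketch: dump link and address state for the metadata veth inside its
    # namespace; requires root and pyroute2, names are taken from the log.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c')
    try:
        idx = ns.link_lookup(ifname='tap5d551bfa-91')[0]   # index 2 in the log
        link = ns.get_links(idx)[0]                         # RTM_NEWLINK message
        print(link.get_attr('IFLA_OPERSTATE'),              # 'UP'
              link.get_attr('IFLA_ADDRESS'))                # 'fa:16:3e:e6:97:21'
        for addr in ns.get_addr(index=idx):                 # RTM_NEWADDR messages
            print(addr['prefixlen'], addr.get_attr('IFA_ADDRESS'))
    finally:
        ns.close()
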
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.248 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[991f6fb6-1cf2-4198-97ff-1bd38401cb95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:12 compute-0 nova_compute[185480]: 2026-01-27 19:28:12.299 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542092.2992857, 22a0bada-8656-451c-af9e-743901138320 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:28:12 compute-0 nova_compute[185480]: 2026-01-27 19:28:12.300 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 22a0bada-8656-451c-af9e-743901138320] VM Started (Lifecycle Event)
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.326 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[46c9fc0b-8953-457c-abcc-7ee49a1e5bfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.329 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d551bfa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.329 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.330 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d551bfa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:12 compute-0 NetworkManager[56191]: <info>  [1769542092.3354] manager: (tap5d551bfa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 27 19:28:12 compute-0 kernel: tap5d551bfa-90: entered promiscuous mode
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.343 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d551bfa-90, col_values=(('external_ids', {'iface-id': 'f01011ae-3ca2-480b-bb28-dfdb7656c380'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
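
The three ovsdbapp transactions above are how the metadata agent re-homes the proxy tap: remove tap5d551bfa-90 from br-ex if it is there, add it to br-int, and set external_ids:iface-id to the Neutron port UUID so ovn-controller can bind it. A rough sketch of the same sequence against the local OVS database, assuming the stock ovsdbapp Open_vSwitch schema API and the default database socket:

    # Sketch only: replay the DelPort/AddPort/DbSet commands from the log via
    # ovsdbapp; the socket path and timeout are assumptions, not from the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    port = 'tap5d551bfa-90'
    iface_id = 'f01011ae-3ca2-480b-bb28-dfdb7656c380'   # Neutron port UUID from the log

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port(port, bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', port, may_exist=True))
        txn.add(api.db_set('Interface', port,
                           ('external_ids', {'iface-id': iface_id})))
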
Jan 27 19:28:12 compute-0 nova_compute[185480]: 2026-01-27 19:28:12.345 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:12 compute-0 ovn_controller[97647]: 2026-01-27T19:28:12Z|00083|binding|INFO|Releasing lport f01011ae-3ca2-480b-bb28-dfdb7656c380 from this chassis (sb_readonly=0)
Jan 27 19:28:12 compute-0 nova_compute[185480]: 2026-01-27 19:28:12.352 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 22a0bada-8656-451c-af9e-743901138320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:28:12 compute-0 nova_compute[185480]: 2026-01-27 19:28:12.359 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542092.3032804, 22a0bada-8656-451c-af9e-743901138320 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:28:12 compute-0 nova_compute[185480]: 2026-01-27 19:28:12.359 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 22a0bada-8656-451c-af9e-743901138320] VM Paused (Lifecycle Event)
Jan 27 19:28:12 compute-0 nova_compute[185480]: 2026-01-27 19:28:12.374 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.376 106898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d551bfa-9f0b-4c1d-8b77-db9b22bd243c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d551bfa-9f0b-4c1d-8b77-db9b22bd243c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 19:28:12 compute-0 nova_compute[185480]: 2026-01-27 19:28:12.377 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.378 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[d3aedde4-8f9f-4b54-8db1-b64389140ae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:12 compute-0 nova_compute[185480]: 2026-01-27 19:28:12.378 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 22a0bada-8656-451c-af9e-743901138320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.380 106898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: global
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     log         /dev/log local0 debug
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     log-tag     haproxy-metadata-proxy-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     user        root
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     group       root
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     maxconn     1024
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     pidfile     /var/lib/neutron/external/pids/5d551bfa-9f0b-4c1d-8b77-db9b22bd243c.pid.haproxy
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     daemon
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: defaults
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     log global
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     mode http
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     option httplog
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     option dontlognull
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     option http-server-close
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     option forwardfor
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     retries                 3
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     timeout http-request    30s
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     timeout connect         30s
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     timeout client          32s
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     timeout server          32s
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     timeout http-keep-alive 30s
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: listen listener
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     bind 169.254.169.254:80
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:     http-request add-header X-OVN-Network-ID 5d551bfa-9f0b-4c1d-8b77-db9b22bd243c
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 19:28:12 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:12.383 106898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c', 'env', 'PROCESS_TAG=haproxy-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d551bfa-9f0b-4c1d-8b77-db9b22bd243c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
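
Immediately after rendering the haproxy configuration above to /var/lib/neutron/ovn-metadata-proxy/5d551bfa-9f0b-4c1d-8b77-db9b22bd243c.conf, the agent launches haproxy inside the ovnmeta namespace through neutron-rootwrap. A small sketch of syntax-checking such a rendered file with haproxy's check mode (the -c flag only parses the configuration; it does not bind or daemonize):

    # Sketch: validate the rendered proxy config before or after launch.
    import subprocess

    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '5d551bfa-9f0b-4c1d-8b77-db9b22bd243c.conf')
    res = subprocess.run(['haproxy', '-c', '-f', cfg],
                         capture_output=True, text=True)
    print(res.returncode, (res.stdout or res.stderr).strip())  # 0 == config is valid
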
Jan 27 19:28:12 compute-0 nova_compute[185480]: 2026-01-27 19:28:12.387 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 22a0bada-8656-451c-af9e-743901138320] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:28:12 compute-0 nova_compute[185480]: 2026-01-27 19:28:12.406 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 22a0bada-8656-451c-af9e-743901138320] During sync_power_state the instance has a pending task (spawning). Skip.
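
The two lines above are the power-state sync triggered by the "Paused" lifecycle event: the database still records power_state 0 (NOSTATE) while libvirt already reports 3 (PAUSED), but the sync is skipped because task_state is still "spawning". A simplified illustration of that check, not Nova's actual code, using the values logged here and the standard nova.compute.power_state constants:

    # Simplified illustration of the decision logged above.
    NOSTATE, RUNNING, PAUSED = 0, 1, 3          # nova.compute.power_state values

    db_power_state = NOSTATE                    # "current DB power_state: 0"
    vm_power_state = PAUSED                     # "VM power_state: 3"
    task_state = 'spawning'                     # pending task on the instance

    if task_state is not None:
        print(f'pending task ({task_state}), skip power-state sync')
    elif db_power_state != vm_power_state:
        print('would update the stored power state to match the hypervisor')
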
Jan 27 19:28:12 compute-0 podman[249156]: 2026-01-27 19:28:12.847555501 +0000 UTC m=+0.081858736 container create 1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 19:28:12 compute-0 podman[249156]: 2026-01-27 19:28:12.798493389 +0000 UTC m=+0.032796674 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 19:28:12 compute-0 systemd[1]: Started libpod-conmon-1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963.scope.
Jan 27 19:28:12 compute-0 systemd[1]: Started libcrun container.
Jan 27 19:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b010637e3240e4eba0a62db66ff7a624cda742a873c8d22c6e7cf3b1dc220b9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 19:28:13 compute-0 podman[249156]: 2026-01-27 19:28:13.012142894 +0000 UTC m=+0.246446159 container init 1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:28:13 compute-0 podman[249168]: 2026-01-27 19:28:13.014998983 +0000 UTC m=+0.130420746 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, com.redhat.component=ubi9-container, vcs-type=git, build-date=2024-09-18T21:23:30, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9, distribution-scope=public, container_name=kepler, io.openshift.expose-services=, io.buildah.version=1.29.0)
Jan 27 19:28:13 compute-0 podman[249156]: 2026-01-27 19:28:13.02750903 +0000 UTC m=+0.261812275 container start 1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:28:13 compute-0 neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c[249182]: [NOTICE]   (249194) : New worker (249196) forked
Jan 27 19:28:13 compute-0 neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c[249182]: [NOTICE]   (249194) : Loading success.
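
With the worker forked and "Loading success." logged, the proxy for network 5d551bfa-... is listening on 169.254.169.254:80 inside its namespace. One quick way to confirm it is accepting connections, assuming curl is present on the host and the usual OVN metadata namespace layout (any HTTP status back, even a 404, means haproxy answered):

    # Sketch: probe the metadata proxy from inside its namespace (root required).
    import subprocess

    ns = 'ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c'
    probe = ['ip', 'netns', 'exec', ns, 'curl', '-s', '-o', '/dev/null',
             '-w', '%{http_code}', 'http://169.254.169.254/']
    print(subprocess.run(probe, capture_output=True, text=True).stdout)
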
Jan 27 19:28:13 compute-0 nova_compute[185480]: 2026-01-27 19:28:13.260 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:13 compute-0 nova_compute[185480]: 2026-01-27 19:28:13.992 185484 DEBUG nova.compute.manager [req-5c1b0451-f995-4249-a863-0a6308433b18 req-70938843-9256-408d-a3c8-1599ea2f90b4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Received event network-vif-plugged-30c6cddc-b940-43d0-9889-aa0f1836eb27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:13 compute-0 nova_compute[185480]: 2026-01-27 19:28:13.994 185484 DEBUG oslo_concurrency.lockutils [req-5c1b0451-f995-4249-a863-0a6308433b18 req-70938843-9256-408d-a3c8-1599ea2f90b4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "22a0bada-8656-451c-af9e-743901138320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:13 compute-0 nova_compute[185480]: 2026-01-27 19:28:13.994 185484 DEBUG oslo_concurrency.lockutils [req-5c1b0451-f995-4249-a863-0a6308433b18 req-70938843-9256-408d-a3c8-1599ea2f90b4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "22a0bada-8656-451c-af9e-743901138320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:13 compute-0 nova_compute[185480]: 2026-01-27 19:28:13.995 185484 DEBUG oslo_concurrency.lockutils [req-5c1b0451-f995-4249-a863-0a6308433b18 req-70938843-9256-408d-a3c8-1599ea2f90b4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "22a0bada-8656-451c-af9e-743901138320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:13 compute-0 nova_compute[185480]: 2026-01-27 19:28:13.996 185484 DEBUG nova.compute.manager [req-5c1b0451-f995-4249-a863-0a6308433b18 req-70938843-9256-408d-a3c8-1599ea2f90b4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Processing event network-vif-plugged-30c6cddc-b940-43d0-9889-aa0f1836eb27 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 19:28:13 compute-0 nova_compute[185480]: 2026-01-27 19:28:13.998 185484 DEBUG nova.compute.manager [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.022 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542094.0152085, 22a0bada-8656-451c-af9e-743901138320 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.023 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 22a0bada-8656-451c-af9e-743901138320] VM Resumed (Lifecycle Event)
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.035 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.043 185484 INFO nova.virt.libvirt.driver [-] [instance: 22a0bada-8656-451c-af9e-743901138320] Instance spawned successfully.
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.044 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.047 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 22a0bada-8656-451c-af9e-743901138320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.053 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 22a0bada-8656-451c-af9e-743901138320] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.068 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.069 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.070 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.071 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.071 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.072 185484 DEBUG nova.virt.libvirt.driver [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.079 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 22a0bada-8656-451c-af9e-743901138320] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.137 185484 INFO nova.compute.manager [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Took 13.69 seconds to spawn the instance on the hypervisor.
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.138 185484 DEBUG nova.compute.manager [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.220 185484 INFO nova.compute.manager [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Took 14.27 seconds to build instance.
Jan 27 19:28:14 compute-0 nova_compute[185480]: 2026-01-27 19:28:14.255 185484 DEBUG oslo_concurrency.lockutils [None req-cc96d7b2-9264-4952-a817-8a53ec30e6b6 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "22a0bada-8656-451c-af9e-743901138320" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:15 compute-0 nova_compute[185480]: 2026-01-27 19:28:15.099 185484 DEBUG nova.network.neutron [req-6ef74f62-d253-4827-ab47-a9aff4c898c7 req-2123bb6d-b22c-4b05-9563-4f048eeb366c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updated VIF entry in instance network info cache for port b0ee253c-f6f7-42b0-906e-b575d0104fbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:28:15 compute-0 nova_compute[185480]: 2026-01-27 19:28:15.099 185484 DEBUG nova.network.neutron [req-6ef74f62-d253-4827-ab47-a9aff4c898c7 req-2123bb6d-b22c-4b05-9563-4f048eeb366c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updating instance_info_cache with network_info: [{"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:28:15 compute-0 nova_compute[185480]: 2026-01-27 19:28:15.252 185484 DEBUG oslo_concurrency.lockutils [req-6ef74f62-d253-4827-ab47-a9aff4c898c7 req-2123bb6d-b22c-4b05-9563-4f048eeb366c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:28:15 compute-0 nova_compute[185480]: 2026-01-27 19:28:15.674 185484 DEBUG nova.network.neutron [req-b0167e75-faad-478d-843e-8a6b044d5f60 req-d01c9363-fde4-4945-8d62-acfe5c728b3e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Updated VIF entry in instance network info cache for port 30c6cddc-b940-43d0-9889-aa0f1836eb27. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:28:15 compute-0 nova_compute[185480]: 2026-01-27 19:28:15.675 185484 DEBUG nova.network.neutron [req-b0167e75-faad-478d-843e-8a6b044d5f60 req-d01c9363-fde4-4945-8d62-acfe5c728b3e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Updating instance_info_cache with network_info: [{"id": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "address": "fa:16:3e:0c:0a:d1", "network": {"id": "5d551bfa-9f0b-4c1d-8b77-db9b22bd243c", "bridge": "br-int", "label": "tempest-ServersTestJSON-875466886-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73cc7e581727408e9ec8407b9dbf203f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c6cddc-b9", "ovs_interfaceid": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:28:15 compute-0 nova_compute[185480]: 2026-01-27 19:28:15.704 185484 DEBUG oslo_concurrency.lockutils [req-b0167e75-faad-478d-843e-8a6b044d5f60 req-d01c9363-fde4-4945-8d62-acfe5c728b3e bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-22a0bada-8656-451c-af9e-743901138320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
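
The two instance_info_cache updates above carry the full network_info structures for ports b0ee253c-... and 30c6cddc-...; the fixed and floating addresses that show up later in the Port_Binding events come straight from this JSON. A small sketch of pulling those fields out of such a list (the literal below is trimmed from the entry logged above):

    # Sketch: extract device name, fixed IPs and floating IPs from a Nova
    # network_info entry; the dict is an abbreviated copy of the logged one.
    network_info = [{
        "id": "30c6cddc-b940-43d0-9889-aa0f1836eb27",
        "address": "fa:16:3e:0c:0a:d1",
        "devname": "tap30c6cddc-b9",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.14", "type": "fixed",
                     "floating_ips": []}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["devname"], ip["address"], floats or '-')
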
Jan 27 19:28:15 compute-0 nova_compute[185480]: 2026-01-27 19:28:15.789 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:16 compute-0 nova_compute[185480]: 2026-01-27 19:28:16.098 185484 DEBUG nova.compute.manager [req-b583b057-7744-41f5-b1cb-b121ceddbcde req-5e3f8e7e-1fb2-4a0f-9b18-f02fc0b825c3 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Received event network-vif-plugged-30c6cddc-b940-43d0-9889-aa0f1836eb27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:16 compute-0 nova_compute[185480]: 2026-01-27 19:28:16.098 185484 DEBUG oslo_concurrency.lockutils [req-b583b057-7744-41f5-b1cb-b121ceddbcde req-5e3f8e7e-1fb2-4a0f-9b18-f02fc0b825c3 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "22a0bada-8656-451c-af9e-743901138320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:16 compute-0 nova_compute[185480]: 2026-01-27 19:28:16.098 185484 DEBUG oslo_concurrency.lockutils [req-b583b057-7744-41f5-b1cb-b121ceddbcde req-5e3f8e7e-1fb2-4a0f-9b18-f02fc0b825c3 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "22a0bada-8656-451c-af9e-743901138320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:16 compute-0 nova_compute[185480]: 2026-01-27 19:28:16.099 185484 DEBUG oslo_concurrency.lockutils [req-b583b057-7744-41f5-b1cb-b121ceddbcde req-5e3f8e7e-1fb2-4a0f-9b18-f02fc0b825c3 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "22a0bada-8656-451c-af9e-743901138320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:16 compute-0 nova_compute[185480]: 2026-01-27 19:28:16.099 185484 DEBUG nova.compute.manager [req-b583b057-7744-41f5-b1cb-b121ceddbcde req-5e3f8e7e-1fb2-4a0f-9b18-f02fc0b825c3 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] No waiting events found dispatching network-vif-plugged-30c6cddc-b940-43d0-9889-aa0f1836eb27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:28:16 compute-0 nova_compute[185480]: 2026-01-27 19:28:16.100 185484 WARNING nova.compute.manager [req-b583b057-7744-41f5-b1cb-b121ceddbcde req-5e3f8e7e-1fb2-4a0f-9b18-f02fc0b825c3 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Received unexpected event network-vif-plugged-30c6cddc-b940-43d0-9889-aa0f1836eb27 for instance with vm_state active and task_state None.
Jan 27 19:28:16 compute-0 ovn_controller[97647]: 2026-01-27T19:28:16Z|00084|binding|INFO|Releasing lport a1115b00-0e91-4461-b1a5-9507c6ba36f8 from this chassis (sb_readonly=0)
Jan 27 19:28:16 compute-0 ovn_controller[97647]: 2026-01-27T19:28:16Z|00085|binding|INFO|Releasing lport f01011ae-3ca2-480b-bb28-dfdb7656c380 from this chassis (sb_readonly=0)
Jan 27 19:28:16 compute-0 nova_compute[185480]: 2026-01-27 19:28:16.354 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:18 compute-0 sshd-session[249206]: Received disconnect from 91.224.92.190 port 50282:11:  [preauth]
Jan 27 19:28:18 compute-0 sshd-session[249206]: Disconnected from authenticating user root 91.224.92.190 port 50282 [preauth]
Jan 27 19:28:18 compute-0 nova_compute[185480]: 2026-01-27 19:28:18.265 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:18 compute-0 podman[249208]: 2026-01-27 19:28:18.349888846 +0000 UTC m=+0.124327087 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 27 19:28:19 compute-0 ovn_controller[97647]: 2026-01-27T19:28:19Z|00086|binding|INFO|Releasing lport a1115b00-0e91-4461-b1a5-9507c6ba36f8 from this chassis (sb_readonly=0)
Jan 27 19:28:19 compute-0 ovn_controller[97647]: 2026-01-27T19:28:19Z|00087|binding|INFO|Releasing lport f01011ae-3ca2-480b-bb28-dfdb7656c380 from this chassis (sb_readonly=0)
Jan 27 19:28:19 compute-0 nova_compute[185480]: 2026-01-27 19:28:19.523 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:20 compute-0 nova_compute[185480]: 2026-01-27 19:28:20.125 185484 DEBUG nova.compute.manager [req-06869fc3-ca14-4239-a8e1-2f8e898cfab1 req-45b1df1b-e4be-412a-95bc-f8e459ebb97f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Received event network-changed-30c6cddc-b940-43d0-9889-aa0f1836eb27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:20 compute-0 nova_compute[185480]: 2026-01-27 19:28:20.126 185484 DEBUG nova.compute.manager [req-06869fc3-ca14-4239-a8e1-2f8e898cfab1 req-45b1df1b-e4be-412a-95bc-f8e459ebb97f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Refreshing instance network info cache due to event network-changed-30c6cddc-b940-43d0-9889-aa0f1836eb27. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:28:20 compute-0 nova_compute[185480]: 2026-01-27 19:28:20.127 185484 DEBUG oslo_concurrency.lockutils [req-06869fc3-ca14-4239-a8e1-2f8e898cfab1 req-45b1df1b-e4be-412a-95bc-f8e459ebb97f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-22a0bada-8656-451c-af9e-743901138320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:28:20 compute-0 nova_compute[185480]: 2026-01-27 19:28:20.128 185484 DEBUG oslo_concurrency.lockutils [req-06869fc3-ca14-4239-a8e1-2f8e898cfab1 req-45b1df1b-e4be-412a-95bc-f8e459ebb97f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-22a0bada-8656-451c-af9e-743901138320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:28:20 compute-0 nova_compute[185480]: 2026-01-27 19:28:20.129 185484 DEBUG nova.network.neutron [req-06869fc3-ca14-4239-a8e1-2f8e898cfab1 req-45b1df1b-e4be-412a-95bc-f8e459ebb97f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Refreshing network info cache for port 30c6cddc-b940-43d0-9889-aa0f1836eb27 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:28:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:20.539 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:20.541 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:20.543 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:20 compute-0 nova_compute[185480]: 2026-01-27 19:28:20.793 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.562 185484 DEBUG oslo_concurrency.lockutils [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Acquiring lock "22a0bada-8656-451c-af9e-743901138320" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.564 185484 DEBUG oslo_concurrency.lockutils [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "22a0bada-8656-451c-af9e-743901138320" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.565 185484 DEBUG oslo_concurrency.lockutils [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Acquiring lock "22a0bada-8656-451c-af9e-743901138320-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.565 185484 DEBUG oslo_concurrency.lockutils [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "22a0bada-8656-451c-af9e-743901138320-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.566 185484 DEBUG oslo_concurrency.lockutils [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "22a0bada-8656-451c-af9e-743901138320-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.568 185484 INFO nova.compute.manager [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Terminating instance
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.569 185484 DEBUG nova.compute.manager [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 19:28:22 compute-0 kernel: tap30c6cddc-b9 (unregistering): left promiscuous mode
Jan 27 19:28:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:22.603 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:28:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:22.605 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.617 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:22 compute-0 NetworkManager[56191]: <info>  [1769542102.6339] device (tap30c6cddc-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 19:28:22 compute-0 ovn_controller[97647]: 2026-01-27T19:28:22Z|00088|binding|INFO|Releasing lport 30c6cddc-b940-43d0-9889-aa0f1836eb27 from this chassis (sb_readonly=0)
Jan 27 19:28:22 compute-0 ovn_controller[97647]: 2026-01-27T19:28:22Z|00089|binding|INFO|Setting lport 30c6cddc-b940-43d0-9889-aa0f1836eb27 down in Southbound
Jan 27 19:28:22 compute-0 ovn_controller[97647]: 2026-01-27T19:28:22Z|00090|binding|INFO|Removing iface tap30c6cddc-b9 ovn-installed in OVS
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.640 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.643 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:22.661 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:0a:d1 10.100.0.14'], port_security=['fa:16:3e:0c:0a:d1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '22a0bada-8656-451c-af9e-743901138320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73cc7e581727408e9ec8407b9dbf203f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf6281c1-6104-4edf-92bd-c5d51eb4346e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=617c4bbb-d204-433c-848d-6e2c41736c50, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=30c6cddc-b940-43d0-9889-aa0f1836eb27) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:28:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:22.666 106898 INFO neutron.agent.ovn.metadata.agent [-] Port 30c6cddc-b940-43d0-9889-aa0f1836eb27 in datapath 5d551bfa-9f0b-4c1d-8b77-db9b22bd243c unbound from our chassis
Jan 27 19:28:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:22.670 106898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d551bfa-9f0b-4c1d-8b77-db9b22bd243c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 19:28:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:22.672 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1fa9d5-a4f9-446a-b409-bf510064c716]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:22.675 106898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c namespace which is not needed anymore
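
Once the last VIF in datapath 5d551bfa-... is unbound, the agent concludes there is nothing left to serve and removes the ovnmeta namespace (taking the haproxy started above down with it). A minimal check that the namespace really disappeared, assuming pyroute2 is available:

    # Sketch: list remaining ovnmeta-* namespaces after the cleanup.
    from pyroute2 import netns

    remaining = [n for n in netns.listnetns() if n.startswith('ovnmeta-')]
    print(remaining)   # the 5d551bfa-... namespace should no longer be listed
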
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.678 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:22 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 27 19:28:22 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 9.303s CPU time.
Jan 27 19:28:22 compute-0 systemd-machined[156762]: Machine qemu-8-instance-00000008 terminated.
Jan 27 19:28:22 compute-0 kernel: tap30c6cddc-b9: entered promiscuous mode
Jan 27 19:28:22 compute-0 NetworkManager[56191]: <info>  [1769542102.7994] manager: (tap30c6cddc-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Jan 27 19:28:22 compute-0 systemd-udevd[249230]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:28:22 compute-0 kernel: tap30c6cddc-b9 (unregistering): left promiscuous mode
Jan 27 19:28:22 compute-0 ovn_controller[97647]: 2026-01-27T19:28:22Z|00091|binding|INFO|Claiming lport 30c6cddc-b940-43d0-9889-aa0f1836eb27 for this chassis.
Jan 27 19:28:22 compute-0 ovn_controller[97647]: 2026-01-27T19:28:22Z|00092|binding|INFO|30c6cddc-b940-43d0-9889-aa0f1836eb27: Claiming fa:16:3e:0c:0a:d1 10.100.0.14
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.801 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:22.817 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:0a:d1 10.100.0.14'], port_security=['fa:16:3e:0c:0a:d1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '22a0bada-8656-451c-af9e-743901138320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73cc7e581727408e9ec8407b9dbf203f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf6281c1-6104-4edf-92bd-c5d51eb4346e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=617c4bbb-d204-433c-848d-6e2c41736c50, chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=30c6cddc-b940-43d0-9889-aa0f1836eb27) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.835 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:22 compute-0 ovn_controller[97647]: 2026-01-27T19:28:22Z|00093|binding|INFO|Releasing lport 30c6cddc-b940-43d0-9889-aa0f1836eb27 from this chassis (sb_readonly=0)
Jan 27 19:28:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:22.843 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:0a:d1 10.100.0.14'], port_security=['fa:16:3e:0c:0a:d1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '22a0bada-8656-451c-af9e-743901138320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73cc7e581727408e9ec8407b9dbf203f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf6281c1-6104-4edf-92bd-c5d51eb4346e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=617c4bbb-d204-433c-848d-6e2c41736c50, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=30c6cddc-b940-43d0-9889-aa0f1836eb27) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
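
The Port_Binding updates above (chassis set on claim, cleared again on release) are exactly what the agent's PortBindingUpdatedEvent is built to match; per the log its constructor arguments are plain ovsdbapp RowEvent parameters (events=('update',), table='Port_Binding', conditions=None). A rough sketch of a similar watcher, assuming ovsdbapp's RowEvent interface and an already-connected southbound IDL; the registration call at the end is an assumption and varies by consumer:

    # Sketch: a minimal ovsdbapp RowEvent reacting to Port_Binding updates.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingChassisEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # "row.chassis == []" corresponds to the unbound case in the log.
            print(row.logical_port, 'chassis ->', row.chassis)

    # sb_idl.notify_handler.watch_event(PortBindingChassisEvent())  # registration (assumed)
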
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.872 185484 INFO nova.virt.libvirt.driver [-] [instance: 22a0bada-8656-451c-af9e-743901138320] Instance destroyed successfully.
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.873 185484 DEBUG nova.objects.instance [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lazy-loading 'resources' on Instance uuid 22a0bada-8656-451c-af9e-743901138320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.890 185484 DEBUG nova.virt.libvirt.vif [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T19:27:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-215967915',display_name='tempest-ServersTestJSON-server-215967915',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-215967915',id=8,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM+QmMLcjSlAuSHlxXPE5hwyIk6QDdNHwRQEPXpyKx1YKoNRY0mNo/wl7iHULAMPj6jIL/1KYvwvCBBPc4VqXQWK9yCFEblr3fx+0cAIeShxFAwnGVT9IeM3bwAWEjWltQ==',key_name='tempest-keypair-517750285',keypairs=<?>,launch_index=0,launched_at=2026-01-27T19:28:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='73cc7e581727408e9ec8407b9dbf203f',ramdisk_id='',reservation_id='r-8optdkg0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-854293747',owner_user_name='tempest-ServersTestJSON-854293747-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T19:28:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='edcee212fadb4bbf9267ca664aca5ec7',uuid=22a0bada-8656-451c-af9e-743901138320,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "address": "fa:16:3e:0c:0a:d1", "network": {"id": "5d551bfa-9f0b-4c1d-8b77-db9b22bd243c", "bridge": "br-int", "label": "tempest-ServersTestJSON-875466886-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73cc7e581727408e9ec8407b9dbf203f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": 
"ovn"}}, "devname": "tap30c6cddc-b9", "ovs_interfaceid": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.891 185484 DEBUG nova.network.os_vif_util [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Converting VIF {"id": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "address": "fa:16:3e:0c:0a:d1", "network": {"id": "5d551bfa-9f0b-4c1d-8b77-db9b22bd243c", "bridge": "br-int", "label": "tempest-ServersTestJSON-875466886-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73cc7e581727408e9ec8407b9dbf203f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c6cddc-b9", "ovs_interfaceid": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.892 185484 DEBUG nova.network.os_vif_util [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:0a:d1,bridge_name='br-int',has_traffic_filtering=True,id=30c6cddc-b940-43d0-9889-aa0f1836eb27,network=Network(5d551bfa-9f0b-4c1d-8b77-db9b22bd243c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c6cddc-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
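The instance record in the unplug entries above carries its cloud-init payload base64-encoded in the user_data field. Decoding it needs nothing beyond the Python standard library; the snippet below is a minimal illustrative sketch using the exact string from the log, not something the services themselves run.

    import base64

    user_data = (
        "IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIK"
        "Y2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo="
    )

    # Decodes to the short shell script Tempest injected:
    #   #!/bin/sh
    #   echo "Printing cirros user authorized keys"
    #   cat ~cirros/.ssh/authorized_keys || true
    print(base64.b64decode(user_data).decode())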
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.893 185484 DEBUG os_vif [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:0a:d1,bridge_name='br-int',has_traffic_filtering=True,id=30c6cddc-b940-43d0-9889-aa0f1836eb27,network=Network(5d551bfa-9f0b-4c1d-8b77-db9b22bd243c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c6cddc-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.895 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.896 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30c6cddc-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.897 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.899 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.902 185484 INFO os_vif [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:0a:d1,bridge_name='br-int',has_traffic_filtering=True,id=30c6cddc-b940-43d0-9889-aa0f1836eb27,network=Network(5d551bfa-9f0b-4c1d-8b77-db9b22bd243c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30c6cddc-b9')
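The DelPortCommand transaction logged at 19:28:22.896 is the programmatic form of dropping the instance's tap device from br-int; from a shell the same operation is `ovs-vsctl --if-exists del-port br-int tap30c6cddc-b9`. A minimal sketch of issuing that from Python, offered only to make the logged transaction concrete:

    import subprocess

    # Same effect as the DelPortCommand(port=tap30c6cddc-b9, bridge=br-int,
    # if_exists=True) transaction above: remove the port, tolerating its absence.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap30c6cddc-b9"],
        check=True,
    )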
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.904 185484 INFO nova.virt.libvirt.driver [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Deleting instance files /var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320_del
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.905 185484 INFO nova.virt.libvirt.driver [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Deletion of /var/lib/nova/instances/22a0bada-8656-451c-af9e-743901138320_del complete
Jan 27 19:28:22 compute-0 neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c[249182]: [NOTICE]   (249194) : haproxy version is 2.8.14-c23fe91
Jan 27 19:28:22 compute-0 neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c[249182]: [NOTICE]   (249194) : path to executable is /usr/sbin/haproxy
Jan 27 19:28:22 compute-0 neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c[249182]: [WARNING]  (249194) : Exiting Master process...
Jan 27 19:28:22 compute-0 neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c[249182]: [WARNING]  (249194) : Exiting Master process...
Jan 27 19:28:22 compute-0 neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c[249182]: [ALERT]    (249194) : Current worker (249196) exited with code 143 (Terminated)
Jan 27 19:28:22 compute-0 neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c[249182]: [WARNING]  (249194) : All workers exited. Exiting... (0)
Jan 27 19:28:22 compute-0 systemd[1]: libpod-1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963.scope: Deactivated successfully.
Jan 27 19:28:22 compute-0 conmon[249182]: conmon 1d283332f19295c4f6ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963.scope/container/memory.events
Jan 27 19:28:22 compute-0 podman[249254]: 2026-01-27 19:28:22.947750521 +0000 UTC m=+0.086682934 container died 1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.985 185484 INFO nova.compute.manager [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Took 0.42 seconds to destroy the instance on the hypervisor.
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.986 185484 DEBUG oslo.service.loopingcall [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.987 185484 DEBUG nova.compute.manager [-] [instance: 22a0bada-8656-451c-af9e-743901138320] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 19:28:22 compute-0 nova_compute[185480]: 2026-01-27 19:28:22.988 185484 DEBUG nova.network.neutron [-] [instance: 22a0bada-8656-451c-af9e-743901138320] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 19:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963-userdata-shm.mount: Deactivated successfully.
Jan 27 19:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b010637e3240e4eba0a62db66ff7a624cda742a873c8d22c6e7cf3b1dc220b9-merged.mount: Deactivated successfully.
Jan 27 19:28:23 compute-0 podman[249254]: 2026-01-27 19:28:23.007844584 +0000 UTC m=+0.146776987 container cleanup 1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:28:23 compute-0 systemd[1]: libpod-conmon-1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963.scope: Deactivated successfully.
Jan 27 19:28:23 compute-0 podman[249287]: 2026-01-27 19:28:23.094834475 +0000 UTC m=+0.052500757 container remove 1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.119 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf63126-8ca4-49db-b783-f190f8508813]: (4, ('Tue Jan 27 07:28:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c (1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963)\n1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963\nTue Jan 27 07:28:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c (1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963)\n1d283332f19295c4f6ce4f90e62d06cb8b0150f75f543503484f0cee68929963\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
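The privsep reply above quotes the output of the haproxy kill script, which stops and then deletes the per-network metadata haproxy sidecar. In podman terms that is a stop followed by a remove; a rough sketch under that assumption (the agent drives this through its privsep daemon and wrapper script, not by shelling out like this):

    import subprocess

    name = "neutron-haproxy-ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c"

    # Approximation of what the kill script quoted in the privsep reply reports:
    # stop the per-network sidecar container, then remove it.
    subprocess.run(["podman", "stop", name], check=True)
    subprocess.run(["podman", "rm", name], check=True)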
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.122 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[40ba3b4c-6990-435e-9bae-50aebfc37e82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.123 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d551bfa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.125 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:23 compute-0 kernel: tap5d551bfa-90: left promiscuous mode
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.140 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.143 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e92eb6-3d84-4eb8-b798-f9609e7c58a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.157 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[7a89d9cd-71b8-4b2f-947d-a0d33f0418e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.160 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[a06c678d-7fee-4dfd-ab34-c88fdabef597]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.184 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[2453b804-90a9-4888-a842-17dbed5a49c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520861, 'reachable_time': 28471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249302, 'error': None, 'target': 'ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.189 107353 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.189 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[0e03ea36-f76d-4522-b44b-3ab6db4f9a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d5d551bfa\x2d9f0b\x2d4c1d\x2d8b77\x2ddb9b22bd243c.mount: Deactivated successfully.
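With the last VIF gone, the metadata agent removes the ovnmeta- namespace (the remove_netns call above), and systemd then drops the corresponding run-netns mount. The manual equivalent is `ip netns delete`; a minimal sketch, assuming root and the iproute2 tools:

    import subprocess

    netns = "ovnmeta-5d551bfa-9f0b-4c1d-8b77-db9b22bd243c"

    # Hand-driven equivalent of the remove_netns call logged above; neutron
    # performs this through its privileged (privsep) helper rather than the CLI.
    subprocess.run(["ip", "netns", "delete", netns], check=True)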
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.191 106898 INFO neutron.agent.ovn.metadata.agent [-] Port 30c6cddc-b940-43d0-9889-aa0f1836eb27 in datapath 5d551bfa-9f0b-4c1d-8b77-db9b22bd243c unbound from our chassis
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.193 106898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d551bfa-9f0b-4c1d-8b77-db9b22bd243c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.195 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[df638b12-a24b-4a9b-9f95-6e44fbb2eecb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.196 106898 INFO neutron.agent.ovn.metadata.agent [-] Port 30c6cddc-b940-43d0-9889-aa0f1836eb27 in datapath 5d551bfa-9f0b-4c1d-8b77-db9b22bd243c unbound from our chassis
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.201 106898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d551bfa-9f0b-4c1d-8b77-db9b22bd243c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 19:28:23 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:23.203 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[16bb432a-f164-4648-a081-79d504a7d19d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.267 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.388 185484 DEBUG nova.network.neutron [req-06869fc3-ca14-4239-a8e1-2f8e898cfab1 req-45b1df1b-e4be-412a-95bc-f8e459ebb97f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Updated VIF entry in instance network info cache for port 30c6cddc-b940-43d0-9889-aa0f1836eb27. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.388 185484 DEBUG nova.network.neutron [req-06869fc3-ca14-4239-a8e1-2f8e898cfab1 req-45b1df1b-e4be-412a-95bc-f8e459ebb97f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Updating instance_info_cache with network_info: [{"id": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "address": "fa:16:3e:0c:0a:d1", "network": {"id": "5d551bfa-9f0b-4c1d-8b77-db9b22bd243c", "bridge": "br-int", "label": "tempest-ServersTestJSON-875466886-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73cc7e581727408e9ec8407b9dbf203f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30c6cddc-b9", "ovs_interfaceid": "30c6cddc-b940-43d0-9889-aa0f1836eb27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.424 185484 DEBUG oslo_concurrency.lockutils [req-06869fc3-ca14-4239-a8e1-2f8e898cfab1 req-45b1df1b-e4be-412a-95bc-f8e459ebb97f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-22a0bada-8656-451c-af9e-743901138320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.471 185484 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769542088.4693515, 87df2e63-6d90-4f9b-9c89-e3156bc11b8b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.472 185484 INFO nova.compute.manager [-] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] VM Stopped (Lifecycle Event)
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.501 185484 DEBUG nova.compute.manager [None req-ffe9b538-fa3d-42dc-9bdd-c481b97d7be6 - - - - - -] [instance: 87df2e63-6d90-4f9b-9c89-e3156bc11b8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.574 185484 DEBUG nova.compute.manager [req-76b18703-9e39-4ece-9d78-518d6de4b226 req-fecc8907-386b-4dd9-be24-8692a806643a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Received event network-vif-unplugged-30c6cddc-b940-43d0-9889-aa0f1836eb27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.575 185484 DEBUG oslo_concurrency.lockutils [req-76b18703-9e39-4ece-9d78-518d6de4b226 req-fecc8907-386b-4dd9-be24-8692a806643a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "22a0bada-8656-451c-af9e-743901138320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.575 185484 DEBUG oslo_concurrency.lockutils [req-76b18703-9e39-4ece-9d78-518d6de4b226 req-fecc8907-386b-4dd9-be24-8692a806643a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "22a0bada-8656-451c-af9e-743901138320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.575 185484 DEBUG oslo_concurrency.lockutils [req-76b18703-9e39-4ece-9d78-518d6de4b226 req-fecc8907-386b-4dd9-be24-8692a806643a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "22a0bada-8656-451c-af9e-743901138320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.576 185484 DEBUG nova.compute.manager [req-76b18703-9e39-4ece-9d78-518d6de4b226 req-fecc8907-386b-4dd9-be24-8692a806643a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] No waiting events found dispatching network-vif-unplugged-30c6cddc-b940-43d0-9889-aa0f1836eb27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.576 185484 DEBUG nova.compute.manager [req-76b18703-9e39-4ece-9d78-518d6de4b226 req-fecc8907-386b-4dd9-be24-8692a806643a bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Received event network-vif-unplugged-30c6cddc-b940-43d0-9889-aa0f1836eb27 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 19:28:23 compute-0 ovn_controller[97647]: 2026-01-27T19:28:23Z|00094|binding|INFO|Releasing lport a1115b00-0e91-4461-b1a5-9507c6ba36f8 from this chassis (sb_readonly=0)
Jan 27 19:28:23 compute-0 nova_compute[185480]: 2026-01-27 19:28:23.677 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:25 compute-0 podman[249303]: 2026-01-27 19:28:25.289985155 +0000 UTC m=+0.072660371 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 19:28:25 compute-0 nova_compute[185480]: 2026-01-27 19:28:25.839 185484 DEBUG nova.compute.manager [req-e3cb1546-3547-4f61-b177-5022d69a8201 req-b0e3bf51-f3d2-4bf5-86b8-e83e39b65004 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Received event network-vif-plugged-30c6cddc-b940-43d0-9889-aa0f1836eb27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:25 compute-0 nova_compute[185480]: 2026-01-27 19:28:25.840 185484 DEBUG oslo_concurrency.lockutils [req-e3cb1546-3547-4f61-b177-5022d69a8201 req-b0e3bf51-f3d2-4bf5-86b8-e83e39b65004 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "22a0bada-8656-451c-af9e-743901138320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:25 compute-0 nova_compute[185480]: 2026-01-27 19:28:25.841 185484 DEBUG oslo_concurrency.lockutils [req-e3cb1546-3547-4f61-b177-5022d69a8201 req-b0e3bf51-f3d2-4bf5-86b8-e83e39b65004 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "22a0bada-8656-451c-af9e-743901138320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:25 compute-0 nova_compute[185480]: 2026-01-27 19:28:25.842 185484 DEBUG oslo_concurrency.lockutils [req-e3cb1546-3547-4f61-b177-5022d69a8201 req-b0e3bf51-f3d2-4bf5-86b8-e83e39b65004 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "22a0bada-8656-451c-af9e-743901138320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:25 compute-0 nova_compute[185480]: 2026-01-27 19:28:25.843 185484 DEBUG nova.compute.manager [req-e3cb1546-3547-4f61-b177-5022d69a8201 req-b0e3bf51-f3d2-4bf5-86b8-e83e39b65004 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] No waiting events found dispatching network-vif-plugged-30c6cddc-b940-43d0-9889-aa0f1836eb27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:28:25 compute-0 nova_compute[185480]: 2026-01-27 19:28:25.843 185484 WARNING nova.compute.manager [req-e3cb1546-3547-4f61-b177-5022d69a8201 req-b0e3bf51-f3d2-4bf5-86b8-e83e39b65004 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Received unexpected event network-vif-plugged-30c6cddc-b940-43d0-9889-aa0f1836eb27 for instance with vm_state active and task_state deleting.
Jan 27 19:28:26 compute-0 nova_compute[185480]: 2026-01-27 19:28:26.557 185484 DEBUG nova.network.neutron [-] [instance: 22a0bada-8656-451c-af9e-743901138320] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:28:26 compute-0 nova_compute[185480]: 2026-01-27 19:28:26.588 185484 INFO nova.compute.manager [-] [instance: 22a0bada-8656-451c-af9e-743901138320] Took 3.60 seconds to deallocate network for instance.
Jan 27 19:28:26 compute-0 nova_compute[185480]: 2026-01-27 19:28:26.662 185484 DEBUG oslo_concurrency.lockutils [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:26 compute-0 nova_compute[185480]: 2026-01-27 19:28:26.663 185484 DEBUG oslo_concurrency.lockutils [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:26 compute-0 nova_compute[185480]: 2026-01-27 19:28:26.781 185484 DEBUG nova.compute.provider_tree [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:28:26 compute-0 nova_compute[185480]: 2026-01-27 19:28:26.806 185484 DEBUG nova.scheduler.client.report [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:28:26 compute-0 nova_compute[185480]: 2026-01-27 19:28:26.834 185484 DEBUG oslo_concurrency.lockutils [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:26 compute-0 nova_compute[185480]: 2026-01-27 19:28:26.861 185484 INFO nova.scheduler.client.report [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Deleted allocations for instance 22a0bada-8656-451c-af9e-743901138320
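The inventory reported at 19:28:26.806 converts to schedulable capacity as (total - reserved) * allocation_ratio per resource class, i.e. 32 VCPU, 7167 MB of RAM and about 70 GB of disk for this provider. A short sketch of that arithmetic with the numbers from the log (the loop below is illustrative, not nova code):

    # Effective capacity per resource class: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {round(capacity, 1)}")
    # VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 70.2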
Jan 27 19:28:26 compute-0 nova_compute[185480]: 2026-01-27 19:28:26.986 185484 DEBUG nova.compute.manager [req-e9d59e56-140c-4d38-99ed-c0f03e6b4405 req-4868226c-f6c3-4a43-ae86-ee393f1056aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 22a0bada-8656-451c-af9e-743901138320] Received event network-vif-deleted-30c6cddc-b940-43d0-9889-aa0f1836eb27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:28:27 compute-0 nova_compute[185480]: 2026-01-27 19:28:27.126 185484 DEBUG oslo_concurrency.lockutils [None req-d8caa10f-112e-402a-9732-d7b924a022c8 edcee212fadb4bbf9267ca664aca5ec7 73cc7e581727408e9ec8407b9dbf203f - - default default] Lock "22a0bada-8656-451c-af9e-743901138320" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:27 compute-0 nova_compute[185480]: 2026-01-27 19:28:27.900 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:28 compute-0 nova_compute[185480]: 2026-01-27 19:28:28.270 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:28 compute-0 podman[249322]: 2026-01-27 19:28:28.319530588 +0000 UTC m=+0.092439906 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 19:28:28 compute-0 podman[249324]: 2026-01-27 19:28:28.339052106 +0000 UTC m=+0.098575437 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 19:28:28 compute-0 podman[249323]: 2026-01-27 19:28:28.343022103 +0000 UTC m=+0.117929590 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
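The three health_status=healthy entries above are podman's periodic health checks for ceilometer_agent_compute, ovn_metadata_agent and ovn_controller; per the config_data shown, each runs the /openstack/healthcheck script mounted into the container. The same checks can be triggered on demand; a small sketch, purely illustrative:

    import subprocess

    # Trigger the same checks podman runs on its timer; exit code 0 means healthy.
    for name in ("ceilometer_agent_compute", "ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")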
Jan 27 19:28:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:28:29.608 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:28:29 compute-0 podman[201378]: time="2026-01-27T19:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:28:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:28:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 27 19:28:31 compute-0 openstack_network_exporter[204477]: ERROR   19:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:28:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:28:31 compute-0 openstack_network_exporter[204477]: ERROR   19:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:28:31 compute-0 openstack_network_exporter[204477]: 
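The two exporter errors above come from the ovs-appctl commands dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show, which only answer for the userspace (netdev) datapath; this host attaches ports with datapath_type "system" (see the VIF details earlier), so "please specify an existing datapath" is expected noise on a kernel-datapath deployment rather than a sign of breakage. A sketch of reproducing the probe by hand, assuming ovs-vswitchd's control socket is reachable:

    import subprocess

    # Reproduce the exporter's PMD probe; on a kernel-datapath host this fails
    # with "please specify an existing datapath", matching the log lines above.
    result = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
        capture_output=True, text=True,
    )
    print(result.returncode, (result.stderr or result.stdout).strip())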
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.103 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.104 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.111 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance be961482-e05a-4655-96ea-7d4810738a3c from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d81125d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
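The burst of "Registering pollster" lines above is stevedore handing each compute pollster to a shared ThreadPoolExecutor, with the cache, pollster history and discovery cache all starting as empty dicts. A minimal sketch of that pattern, assuming an illustrative entry-point namespace and get_samples() signature rather than ceilometer's exact ones:

    # Sketch only: stevedore extensions dispatched onto a thread pool with
    # shared (initially empty) caches, mirroring the registration lines above.
    # The namespace 'example.pollsters' and the get_samples() call are
    # assumptions for illustration, not ceilometer's real entry points.
    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    def poll_one(ext, cache, discovery_cache):
        # One registered extension == one pollster; running it produces the
        # "Polling pollster <name>" / "Finished polling" pairs seen later.
        return ext.name, ext.obj.get_samples(cache, discovery_cache)

    def register_pollsters(namespace='example.pollsters'):
        mgr = extension.ExtensionManager(namespace=namespace,
                                         invoke_on_load=True)
        executor = ThreadPoolExecutor(max_workers=4)
        cache, discovery_cache = {}, {}   # both start as {} in the log
        return [executor.submit(poll_one, ext, cache, discovery_cache)
                for ext in mgr]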
Jan 27 19:28:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:32.112 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/be961482-e05a-4655-96ea-7d4810738a3c -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:28:32 compute-0 nova_compute[185480]: 2026-01-27 19:28:32.904 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.166 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1997 Content-Type: application/json Date: Tue, 27 Jan 2026 19:28:32 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-07c5df5c-03fc-412a-96e9-4f318a40d1e3 x-openstack-request-id: req-07c5df5c-03fc-412a-96e9-4f318a40d1e3 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.166 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "be961482-e05a-4655-96ea-7d4810738a3c", "name": "tempest-AttachInterfacesUnderV243Test-server-1779555975", "status": "ACTIVE", "tenant_id": "4102701402ff4f059dd67182960e5b64", "user_id": "87274dd877fa476fa885a5665ba052ec", "metadata": {}, "hostId": "ef251b07c1f759aa25a99941da82b9cedb6c0082dab5e4896c0339a5", "image": {"id": "729797c6-2677-44bd-a4a8-949d1f57b0a2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/729797c6-2677-44bd-a4a8-949d1f57b0a2"}]}, "flavor": {"id": "49f81b8c-e0df-4a53-87c6-69576be59651", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/49f81b8c-e0df-4a53-87c6-69576be59651"}]}, "created": "2026-01-27T19:27:52Z", "updated": "2026-01-27T19:28:04Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1659482718-network": [{"version": 4, "addr": "10.100.0.13", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d8:60:a3"}, {"version": 4, "addr": "192.168.122.203", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d8:60:a3"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/be961482-e05a-4655-96ea-7d4810738a3c"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/be961482-e05a-4655-96ea-7d4810738a3c"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1143519939", "OS-SRV-USG:launched_at": "2026-01-27T19:28:04.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--294283955"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.166 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/be961482-e05a-4655-96ea-7d4810738a3c used request id req-07c5df5c-03fc-412a-96e9-4f318a40d1e3 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
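The REQ/RESP pair above is the get_server call made through python-novaclient over a keystoneauth1 session. A rough equivalent is sketched below; the auth URL and credentials are placeholders, and only the server UUID comes from this log.

    # Sketch: the novaclient call corresponding to the curl line above.
    # Auth URL, username, password and project are placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client as nova_client

    auth = v3.Password(auth_url='https://keystone.example.com/v3',
                       username='ceilometer', password='REPLACE_ME',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    nova = nova_client.Client('2.1', session=sess)

    # Same instance the agent is discovering in this polling cycle.
    server = nova.servers.get('be961482-e05a-4655-96ea-7d4810738a3c')
    print(server.status, server.flavor['id'], server.addresses)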
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.168 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'be961482-e05a-4655-96ea-7d4810738a3c', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1779555975', 'flavor': {'id': '49f81b8c-e0df-4a53-87c6-69576be59651', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '729797c6-2677-44bd-a4a8-949d1f57b0a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4102701402ff4f059dd67182960e5b64', 'user_id': '87274dd877fa476fa885a5665ba052ec', 'hostId': 'ef251b07c1f759aa25a99941da82b9cedb6c0082dab5e4896c0339a5', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.169 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.169 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.170 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.172 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T19:28:33.170295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.199 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.200 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
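The two disk.device.allocation samples (204800 and 512000) are per-device values from libvirt's block info, which reports a (capacity, allocation, physical) triple for each disk; the capacity and usage meters polled further down come from the same call. A sketch, assuming the qemu:///system URI and guessing the device names:

    # Sketch: per-device block info behind disk.device.allocation /
    # .capacity / .usage. The connection URI and device names ('vda', 'vdb')
    # are assumptions; the instance name comes from the discovery output.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000007')

    for dev in ('vda', 'vdb'):            # two devices, two samples per meter
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, 'capacity=%d allocation=%d physical=%d'
              % (capacity, allocation, physical))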
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.201 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.201 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T19:28:33.201520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.232 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.233 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance be961482-e05a-4655-96ea-7d4810738a3c: ceilometer.compute.pollsters.NoVolumeException
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.233 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
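memory.usage is read from the guest's balloon statistics; when libvirt does not expose them (no balloon driver in the guest, or no stats period configured), there is no volume to report and the agent logs the NoVolumeException warning above. A sketch of the underlying call, with the usage formula given as the usual approximation rather than a value from this log:

    # Sketch: memory.usage depends on balloon stats from the guest. Whether
    # 'available'/'unused' are present depends on the guest driver and the
    # configured stats period; missing keys are one common reason for the
    # "Unavailable" sample seen above.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000007')

    stats = dom.memoryStats()              # dict of counters, in KiB
    if 'available' in stats and 'unused' in stats:
        usage_mb = (stats['available'] - stats['unused']) / 1024.0
        print('memory.usage ~ %.1f MB' % usage_mb)
    else:
        print('memory stats not exposed by this guest:', sorted(stats))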
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.233 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.234 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.234 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.234 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.234 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.235 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T19:28:33.234619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.236 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
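power.state volume 1 matches both libvirt's VIR_DOMAIN_RUNNING and Nova's RUNNING power_state (also 1 in the server body above). A sketch of reading the state directly:

    # Sketch: domain state behind the power.state sample (volume 1 above).
    # URI is an assumption; the instance name comes from the discovery output.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000007')

    state, maxmem, mem, vcpus, cputime_ns = dom.info()
    print('power.state =', state,
          '(running)' if state == libvirt.VIR_DOMAIN_RUNNING else '')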
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.236 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.236 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.236 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.237 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.237 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T19:28:33.237371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.237 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.241 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for be961482-e05a-4655-96ea-7d4810738a3c / tapb0ee253c-f6 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.242 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.242 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
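network.incoming.bytes.delta is the difference between the current cumulative rx_bytes counter and the value cached from the previous poll; on the first cycle there is no predecessor, hence the "No delta meter predecessor" DEBUG line and a sample of 0. A generic sketch of that bookkeeping, using the tap device name from the log:

    # Sketch: computing a *.delta meter from cumulative interface counters.
    # The plain dict stands in for the pollster's real predecessor cache;
    # this is an illustration, not ceilometer's implementation.
    import libvirt

    _previous = {}   # (instance uuid, device) -> last cumulative rx_bytes

    def incoming_bytes_delta(dom, dev):
        rx_bytes = dom.interfaceStats(dev)[0]
        key = (dom.UUIDString(), dev)
        prev = _previous.get(key)
        _previous[key] = rx_bytes
        # First poll: no predecessor, report 0 (matches the log).
        return 0 if prev is None else max(rx_bytes - prev, 0)

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000007')
    print(incoming_bytes_delta(dom, 'tapb0ee253c-f6'))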
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.242 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.243 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.243 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.243 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.243 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T19:28:33.243393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 nova_compute[185480]: 2026-01-27 19:28:33.275 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.287 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.288 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
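disk.device.read.requests (and disk.device.read.bytes, polled a little further down) are the rd_req and rd_bytes fields of libvirt's per-device block statistics. A sketch, again with assumed device names:

    # Sketch: cumulative block I/O counters behind disk.device.read.* and
    # disk.device.write.*. Device names are assumptions, as before.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000007')

    for dev in ('vda', 'vdb'):
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, 'read.requests=%d read.bytes=%d write.requests=%d'
              % (rd_req, rd_bytes, wr_req))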
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.289 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.289 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.289 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T19:28:33.289470) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.290 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.290 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.290 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.290 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.291 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T19:28:33.290717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.291 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
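Each pollster run ends with a pair of heartbeat lines: the polling thread (id 14 here) requests the update and a second thread (id 12) records the timestamp. A generic sketch of such a per-pollster status map; this only illustrates the pattern visible in the paired DEBUG lines, not ceilometer's implementation:

    # Sketch: per-pollster heartbeat timestamps updated from worker threads.
    import threading
    from datetime import datetime, timezone

    _heartbeats = {}
    _lock = threading.Lock()

    def heartbeat(pollster_name):
        ts = datetime.now(timezone.utc).isoformat()
        with _lock:
            _heartbeats[pollster_name] = ts
        return ts

    heartbeat('network.incoming.packets.drop')
    print(_heartbeats)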
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.291 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.291 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.291 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.291 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.292 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T19:28:33.291933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.292 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.293 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.293 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.293 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.293 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.293 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/cpu volume: 27910000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T19:28:33.293509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.294 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
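The cpu meter is cumulative guest CPU time in nanoseconds (27910000000 ns, roughly 27.9 s, for this instance), taken from the domain info. The sketch below also shows the usual way a utilisation percentage is derived from two consecutive samples; that derivation is standard practice, not a value present in this log:

    # Sketch: cpu meter = cumulative CPU time in nanoseconds from dom.info().
    import time
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000007')

    def cpu_time_ns(domain):
        state, maxmem, mem, vcpus, cputime_ns = domain.info()
        return cputime_ns, vcpus

    t0, vcpus = cpu_time_ns(dom)
    time.sleep(5)
    t1, _ = cpu_time_ns(dom)
    util_pct = 100.0 * (t1 - t0) / (5 * 1e9 * vcpus)
    print('cpu time ns:', t1, 'approx cpu_util: %.2f%%' % util_pct)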
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.294 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.294 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.294 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.294 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T19:28:33.294700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.295 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.296 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.296 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.297 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.297 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.297 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.297 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.297 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
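Every pollster block above includes a "Checking if we need coordination ... hashrings [None]" pair: with no coordination configured, this agent polls everything it discovers locally, whereas with coordination enabled each agent only handles the resources that hash onto its partition. A generic consistent-hash sketch of that idea (a simplified stand-in, not the tooz hashring ceilometer actually uses):

    # Sketch: generic hash-based partitioning of resources across agents,
    # illustrating what a configured hashring would decide for one resource.
    import hashlib

    def owner(resource_id, agents):
        digests = {a: hashlib.sha256((a + resource_id).encode()).hexdigest()
                   for a in agents}
        # The agent with the smallest digest "owns" the resource.
        return min(digests, key=digests.get)

    agents = ['compute-0', 'compute-1']
    print(owner('be961482-e05a-4655-96ea-7d4810738a3c', agents))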
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.298 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.298 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T19:28:33.297413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.298 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.read.latency volume: 1471123078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T19:28:33.298707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.299 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.read.latency volume: 3253250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.299 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.300 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.300 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.300 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T19:28:33.300183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.301 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.301 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.301 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T19:28:33.301379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.302 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.302 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T19:28:33.302580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.303 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.303 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.303 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.304 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.304 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T19:28:33.304069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.304 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.305 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.305 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.305 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T19:28:33.305197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.305 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.306 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.306 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.306 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.307 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.307 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.307 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T19:28:33.306590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.308 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T19:28:33.307801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.308 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.309 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T19:28:33.309130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.309 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.309 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.310 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.310 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T19:28:33.310254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.310 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1779555975>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1779555975>]
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.311 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.311 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.311 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T19:28:33.311374) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.312 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.312 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T19:28:33.312510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.313 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.313 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.313 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.313 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.313 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.313 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.314 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.315 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.315 14 DEBUG ceilometer.compute.pollsters [-] be961482-e05a-4655-96ea-7d4810738a3c/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T19:28:33.313510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T19:28:33.315073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.316 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.316 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.316 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.316 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.316 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1779555975>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1779555975>]
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T19:28:33.316344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:28:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:28:33.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
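Editor's note: the two ERROR lines above ("Prevent pollster ... from polling ... anymore!") are ceilometer's reaction to PollsterPermanentError. The libvirt inspector does not provide the *.bytes.rate meters, so those pollsters raise the exception for the affected instances and the manager stops offering them to that pollster on this source. Below is a minimal sketch of that blacklisting idea with invented names; it is not ceilometer's actual code.

    class PollsterPermanentError(Exception):
        """Raised by a pollster for resources it can never sample."""
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    def run_polling_cycle(pollsters, resources, blacklist):
        """Poll every resource with every pollster, skipping blacklisted pairs."""
        for name, poll in pollsters.items():
            todo = [r for r in resources if r not in blacklist.get(name, set())]
            if not todo:
                continue  # everything this pollster could see is already blacklisted
            try:
                for sample in poll(todo):
                    print(name, sample)
            except PollsterPermanentError as err:
                # Remember the failing resources so they are never retried.
                blacklist.setdefault(name, set()).update(err.resources)
                print(f"Prevent pollster {name} from polling {err.resources} anymore!")

    # Hypothetical pollsters: rates are unavailable, plain counters work.
    def bytes_rate(resources):
        raise PollsterPermanentError(resources)

    def incoming_bytes(resources):
        return [(r, 90) for r in resources]

    blacklist = {}
    pollsters = {"network.incoming.bytes.rate": bytes_rate,
                 "network.incoming.bytes": incoming_bytes}
    uuid = "be961482-e05a-4655-96ea-7d4810738a3c"
    run_polling_cycle(pollsters, [uuid], blacklist)
    run_polling_cycle(pollsters, [uuid], blacklist)  # rate pollster is now skipped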
Jan 27 19:28:34 compute-0 nova_compute[185480]: 2026-01-27 19:28:34.172 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:37 compute-0 ovn_controller[97647]: 2026-01-27T19:28:37Z|00095|binding|INFO|Releasing lport a1115b00-0e91-4461-b1a5-9507c6ba36f8 from this chassis (sb_readonly=0)
Jan 27 19:28:37 compute-0 nova_compute[185480]: 2026-01-27 19:28:37.346 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:37 compute-0 nova_compute[185480]: 2026-01-27 19:28:37.867 185484 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769542102.8662446, 22a0bada-8656-451c-af9e-743901138320 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:28:37 compute-0 nova_compute[185480]: 2026-01-27 19:28:37.868 185484 INFO nova.compute.manager [-] [instance: 22a0bada-8656-451c-af9e-743901138320] VM Stopped (Lifecycle Event)
Jan 27 19:28:37 compute-0 nova_compute[185480]: 2026-01-27 19:28:37.906 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:37 compute-0 nova_compute[185480]: 2026-01-27 19:28:37.910 185484 DEBUG nova.compute.manager [None req-695ab002-f550-479e-a795-6e5fd06893f9 - - - - - -] [instance: 22a0bada-8656-451c-af9e-743901138320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:28:38 compute-0 nova_compute[185480]: 2026-01-27 19:28:38.278 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:39 compute-0 ovn_controller[97647]: 2026-01-27T19:28:39Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:60:a3 10.100.0.13
Jan 27 19:28:39 compute-0 ovn_controller[97647]: 2026-01-27T19:28:39Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:60:a3 10.100.0.13
Jan 27 19:28:41 compute-0 podman[249401]: 2026-01-27 19:28:41.381201622 +0000 UTC m=+0.138378131 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 19:28:41 compute-0 podman[249402]: 2026-01-27 19:28:41.392029557 +0000 UTC m=+0.143126677 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 19:28:41 compute-0 nova_compute[185480]: 2026-01-27 19:28:41.979 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:42 compute-0 nova_compute[185480]: 2026-01-27 19:28:42.910 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:43 compute-0 nova_compute[185480]: 2026-01-27 19:28:43.282 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:43 compute-0 podman[249440]: 2026-01-27 19:28:43.357266276 +0000 UTC m=+0.125739162 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, config_id=kepler, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0)
Jan 27 19:28:47 compute-0 nova_compute[185480]: 2026-01-27 19:28:47.917 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:48 compute-0 nova_compute[185480]: 2026-01-27 19:28:48.285 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:49 compute-0 podman[249457]: 2026-01-27 19:28:49.314396061 +0000 UTC m=+0.095511751 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
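Editor's note: the health_status=healthy events above are emitted by podman each time it runs the healthcheck command stored in a container's config_data (for example "/openstack/healthcheck node_exporter"). The same check can be driven by hand; a small sketch using standard podman commands, with the container name taken from the log (the inspect key differs slightly between podman versions):

    import json
    import subprocess

    name = "node_exporter"  # container name from the health_status event above

    # Run the container's configured healthcheck once.
    subprocess.run(["podman", "healthcheck", "run", name], check=False)

    # Read back the health state podman recorded for the container.
    state = json.loads(subprocess.check_output(["podman", "inspect", name]))[0]["State"]
    print(state.get("Health", state.get("Healthcheck")))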
Jan 27 19:28:51 compute-0 nova_compute[185480]: 2026-01-27 19:28:51.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:28:51 compute-0 nova_compute[185480]: 2026-01-27 19:28:51.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:28:51 compute-0 nova_compute[185480]: 2026-01-27 19:28:51.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:28:51 compute-0 nova_compute[185480]: 2026-01-27 19:28:51.800 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:28:51 compute-0 nova_compute[185480]: 2026-01-27 19:28:51.800 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:28:51 compute-0 nova_compute[185480]: 2026-01-27 19:28:51.801 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:28:51 compute-0 nova_compute[185480]: 2026-01-27 19:28:51.801 185484 DEBUG nova.objects.instance [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lazy-loading 'info_cache' on Instance uuid be961482-e05a-4655-96ea-7d4810738a3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:28:52 compute-0 nova_compute[185480]: 2026-01-27 19:28:52.741 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:52 compute-0 nova_compute[185480]: 2026-01-27 19:28:52.920 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:53 compute-0 nova_compute[185480]: 2026-01-27 19:28:53.288 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:53 compute-0 nova_compute[185480]: 2026-01-27 19:28:53.741 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updating instance_info_cache with network_info: [{"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:28:53 compute-0 nova_compute[185480]: 2026-01-27 19:28:53.765 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:28:53 compute-0 nova_compute[185480]: 2026-01-27 19:28:53.766 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
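Editor's note: _heal_instance_info_cache above is one of nova-compute's oslo.service periodic tasks; the "Running periodic task" line comes from run_periodic_tasks, which calls every method decorated with @periodic_task.periodic_task at its configured spacing. A minimal sketch of that pattern follows; the class and task body are illustrative, not nova's.

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF(args=[])  # parse no CLI args; defaults are enough for the demo

    class DemoManager(periodic_task.PeriodicTasks):
        """Tiny stand-in for the manager classes that host periodic tasks."""

        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_info_cache(self, context):
            print("refreshing cached network info for one instance")

    mgr = DemoManager()
    # In a real service a looping call drives this; here we trigger one pass.
    mgr.run_periodic_tasks(context=None)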
Jan 27 19:28:54 compute-0 nova_compute[185480]: 2026-01-27 19:28:54.479 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Acquiring lock "f79ddcc5-ee21-43e8-9d0d-60476a477361" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:54 compute-0 nova_compute[185480]: 2026-01-27 19:28:54.480 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lock "f79ddcc5-ee21-43e8-9d0d-60476a477361" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:54 compute-0 nova_compute[185480]: 2026-01-27 19:28:54.709 185484 DEBUG nova.compute.manager [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 19:28:54 compute-0 nova_compute[185480]: 2026-01-27 19:28:54.998 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:55 compute-0 nova_compute[185480]: 2026-01-27 19:28:55.000 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:55 compute-0 nova_compute[185480]: 2026-01-27 19:28:55.015 185484 DEBUG nova.virt.hardware [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 19:28:55 compute-0 nova_compute[185480]: 2026-01-27 19:28:55.016 185484 INFO nova.compute.claims [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Claim successful on node compute-0.ctlplane.example.com
Jan 27 19:28:55 compute-0 nova_compute[185480]: 2026-01-27 19:28:55.515 185484 DEBUG nova.compute.provider_tree [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:28:55 compute-0 nova_compute[185480]: 2026-01-27 19:28:55.580 185484 DEBUG nova.scheduler.client.report [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:28:55 compute-0 nova_compute[185480]: 2026-01-27 19:28:55.653 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
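Editor's note: the inventory dict logged above is what nova-compute reports to placement for this provider; placement derives the schedulable capacity of each resource class as (total - reserved) * allocation_ratio. A quick worked check against the logged values:

    # Inventory as logged for provider 8877e97b-aaf6-4210-a385-0f49c1a02906.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, int(capacity))
    # VCPU 32, MEMORY_MB 7167, DISK_GB 70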
Jan 27 19:28:55 compute-0 nova_compute[185480]: 2026-01-27 19:28:55.654 185484 DEBUG nova.compute.manager [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 19:28:55 compute-0 nova_compute[185480]: 2026-01-27 19:28:55.872 185484 DEBUG nova.compute.manager [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 19:28:55 compute-0 nova_compute[185480]: 2026-01-27 19:28:55.873 185484 DEBUG nova.network.neutron [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 19:28:55 compute-0 nova_compute[185480]: 2026-01-27 19:28:55.924 185484 INFO nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 19:28:55 compute-0 nova_compute[185480]: 2026-01-27 19:28:55.962 185484 DEBUG nova.compute.manager [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.153 185484 DEBUG nova.compute.manager [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.154 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.155 185484 INFO nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Creating image(s)
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.156 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Acquiring lock "/var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.156 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lock "/var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.157 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lock "/var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
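Editor's note: the "Acquiring lock ... acquired ... released :: held" triplet above is oslo.concurrency's standard logging around a named lock; nova uses it here to serialize writes to the instance's disk.info file, and earlier to serialize build_and_run_instance per instance UUID. A minimal sketch of the same primitive, with the names taken from the log:

    from oslo_concurrency import lockutils

    # Context-manager form. By default this is an in-process semaphore; pass
    # external=True plus a lock_path for a file lock shared across processes.
    with lockutils.lock("/var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.info"):
        pass  # critical section: rewrite the disk.info file

    # Decorator form, as used around _locked_do_build_and_run_instance.
    @lockutils.synchronized("f79ddcc5-ee21-43e8-9d0d-60476a477361")
    def do_build_and_run_instance():
        pass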
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.172 185484 DEBUG oslo_concurrency.processutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.250 185484 DEBUG oslo_concurrency.processutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.252 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Acquiring lock "69104a6fcf619df4c492b27a202c23b5821c0e32" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.252 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.273 185484 DEBUG oslo_concurrency.processutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.333 185484 DEBUG oslo_concurrency.processutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
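Editor's note: the "Running cmd (subprocess)" lines above show how nova probes the cached base image: qemu-img info is wrapped in oslo_concurrency.prlimit so that inspecting a malformed image cannot exceed 1 GiB of address space or 30 s of CPU time. The same invocation can be reproduced directly; paths and limits are copied from the log:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",          # cap address space and CPU time
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", base, "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])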
Jan 27 19:28:56 compute-0 podman[249479]: 2026-01-27 19:28:56.33581342 +0000 UTC m=+0.110037146 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126)
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.335 185484 DEBUG oslo_concurrency.processutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32,backing_fmt=raw /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.373 185484 DEBUG oslo_concurrency.processutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32,backing_fmt=raw /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
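The CMD above creates the instance's copy-on-write disk as a qcow2 overlay whose backing file is the cached raw base image. A standalone sketch of an equivalent call, assuming the same paths and the 1 GiB virtual size from the log:

    import os
    import subprocess

    BASE = "/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32"
    OVERLAY = "/var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk"

    def create_overlay(base, overlay, size_bytes=1073741824):
        # backing_fmt=raw pins the backing format so qemu-img never has to probe it.
        subprocess.run(
            ["qemu-img", "create", "-f", "qcow2",
             "-o", f"backing_file={base},backing_fmt=raw",
             overlay, str(size_bytes)],
            env={**os.environ, "LC_ALL": "C", "LANG": "C"},
            check=True)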
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.374 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.375 185484 DEBUG oslo_concurrency.processutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.455 185484 DEBUG oslo_concurrency.processutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.457 185484 DEBUG nova.virt.disk.api [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Checking if we can resize image /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.458 185484 DEBUG oslo_concurrency.processutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.554 185484 DEBUG oslo_concurrency.processutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.555 185484 DEBUG nova.virt.disk.api [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Cannot resize image /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
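The resize check above is a no-op here: the requested size (1073741824) equals the overlay's current virtual size, and a qcow2 can only be grown in place, so Nova logs "Cannot resize image ... to a smaller size" and moves on. A rough sketch of that comparison (not nova.virt.disk.api itself) using the same qemu-img info JSON output:

    import json
    import subprocess

    def can_grow(disk_path, requested_bytes):
        # "virtual-size" is the guest-visible size reported by qemu-img info.
        info = subprocess.run(
            ["qemu-img", "info", "--force-share", "--output=json", disk_path],
            capture_output=True, check=True, text=True).stdout
        return requested_bytes > json.loads(info)["virtual-size"]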
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.556 185484 DEBUG nova.objects.instance [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lazy-loading 'migration_context' on Instance uuid f79ddcc5-ee21-43e8-9d0d-60476a477361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.678 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.679 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Ensure instance console log exists: /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.679 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.679 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:28:56 compute-0 nova_compute[185480]: 2026-01-27 19:28:56.680 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
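The acquire/release triplet above for "vgpu_resources" is the standard oslo.concurrency in-process lock pattern; the 0.000s hold time shows no mediated devices were actually needed for this flavor. A minimal sketch of the same pattern, assuming only the lock name from the log (the body is a placeholder, not Nova's _allocate_mdevs):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs_sketch():
        # The decorator's wrapper emits the "Acquiring lock", "Lock acquired"
        # and "Lock released" DEBUG lines seen in the journal above.
        return []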
Jan 27 19:28:57 compute-0 nova_compute[185480]: 2026-01-27 19:28:57.821 185484 DEBUG nova.policy [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '79532101c66342a980a90799ac41a442', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '50b0e23834964280a34973a87d80d1b8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 19:28:57 compute-0 nova_compute[185480]: 2026-01-27 19:28:57.925 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:58 compute-0 nova_compute[185480]: 2026-01-27 19:28:58.291 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:28:59 compute-0 podman[249511]: 2026-01-27 19:28:59.338644038 +0000 UTC m=+0.097719856 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 19:28:59 compute-0 podman[249513]: 2026-01-27 19:28:59.359466608 +0000 UTC m=+0.117089160 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:28:59 compute-0 podman[249512]: 2026-01-27 19:28:59.359845427 +0000 UTC m=+0.122666327 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 19:28:59 compute-0 nova_compute[185480]: 2026-01-27 19:28:59.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:28:59 compute-0 podman[201378]: time="2026-01-27T19:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:28:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:28:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
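The two access-log lines above are the prometheus-podman-exporter polling the libpod REST API over the podman socket configured earlier in this log (unix:///run/podman/podman.sock). A trimmed sketch of the same container-list query, using curl purely for illustration and only the all=true parameter:

    import json
    import subprocess

    def list_containers(sock="/run/podman/podman.sock"):
        out = subprocess.run(
            ["curl", "-s", "--unix-socket", sock,
             "http://localhost/v4.9.3/libpod/containers/json?all=true"],
            capture_output=True, check=True, text=True).stdout
        return json.loads(out)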
Jan 27 19:29:01 compute-0 openstack_network_exporter[204477]: ERROR   19:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:29:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:29:01 compute-0 openstack_network_exporter[204477]: ERROR   19:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:29:01 compute-0 openstack_network_exporter[204477]: 
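The two ERROR lines above come from openstack_network_exporter issuing ovs-appctl dpif-netdev/* calls; "please specify an existing datapath" is typically what ovs-vswitchd answers when no userspace (netdev) datapath exists, which is expected on a kernel-datapath compute node, so these read as noise rather than a fault. Reproducing one of the calls directly (illustration only):

    import subprocess

    # On a host without a dpif-netdev (userspace/DPDK) datapath this is
    # expected to fail with the same "please specify an existing datapath".
    result = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                            capture_output=True, text=True)
    print(result.returncode, (result.stderr or result.stdout).strip())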
Jan 27 19:29:02 compute-0 nova_compute[185480]: 2026-01-27 19:29:02.169 185484 DEBUG nova.network.neutron [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Successfully created port: eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 19:29:02 compute-0 nova_compute[185480]: 2026-01-27 19:29:02.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:29:02 compute-0 nova_compute[185480]: 2026-01-27 19:29:02.836 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:02 compute-0 nova_compute[185480]: 2026-01-27 19:29:02.836 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:02 compute-0 nova_compute[185480]: 2026-01-27 19:29:02.837 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:02 compute-0 nova_compute[185480]: 2026-01-27 19:29:02.837 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:29:02 compute-0 nova_compute[185480]: 2026-01-27 19:29:02.930 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.016 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.118 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.119 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.186 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.293 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.697 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.699 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=72.35123062133789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.699 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.700 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.826 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance be961482-e05a-4655-96ea-7d4810738a3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.826 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance f79ddcc5-ee21-43e8-9d0d-60476a477361 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.826 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.827 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:29:03 compute-0 nova_compute[185480]: 2026-01-27 19:29:03.893 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:29:04 compute-0 nova_compute[185480]: 2026-01-27 19:29:04.049 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
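The inventory line above is what Placement uses to compute schedulable capacity: for each resource class that is (total - reserved) * allocation_ratio, with min_unit/max_unit/step_size only constraining individual allocations. Plugging in the values from the log:

    # Values copied from the inventory line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2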
Jan 27 19:29:04 compute-0 nova_compute[185480]: 2026-01-27 19:29:04.095 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:29:04 compute-0 nova_compute[185480]: 2026-01-27 19:29:04.096 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:05 compute-0 nova_compute[185480]: 2026-01-27 19:29:05.091 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:29:05 compute-0 nova_compute[185480]: 2026-01-27 19:29:05.091 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:29:05 compute-0 nova_compute[185480]: 2026-01-27 19:29:05.092 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:29:06 compute-0 nova_compute[185480]: 2026-01-27 19:29:06.518 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:29:06 compute-0 nova_compute[185480]: 2026-01-27 19:29:06.962 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:07 compute-0 nova_compute[185480]: 2026-01-27 19:29:07.016 185484 DEBUG nova.network.neutron [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Successfully updated port: eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 19:29:07 compute-0 nova_compute[185480]: 2026-01-27 19:29:07.181 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Acquiring lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:29:07 compute-0 nova_compute[185480]: 2026-01-27 19:29:07.182 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Acquired lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:29:07 compute-0 nova_compute[185480]: 2026-01-27 19:29:07.183 185484 DEBUG nova.network.neutron [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:29:07 compute-0 nova_compute[185480]: 2026-01-27 19:29:07.320 185484 DEBUG nova.compute.manager [req-c60a1a9a-0872-4e25-8429-7befeb56b59f req-30f754df-1dcf-4bda-bfc7-3c3a1014b549 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Received event network-changed-eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:07 compute-0 nova_compute[185480]: 2026-01-27 19:29:07.321 185484 DEBUG nova.compute.manager [req-c60a1a9a-0872-4e25-8429-7befeb56b59f req-30f754df-1dcf-4bda-bfc7-3c3a1014b549 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Refreshing instance network info cache due to event network-changed-eb2f2dfe-3f62-4bba-8586-84ee449f5ae5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:29:07 compute-0 nova_compute[185480]: 2026-01-27 19:29:07.322 185484 DEBUG oslo_concurrency.lockutils [req-c60a1a9a-0872-4e25-8429-7befeb56b59f req-30f754df-1dcf-4bda-bfc7-3c3a1014b549 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:29:07 compute-0 nova_compute[185480]: 2026-01-27 19:29:07.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:29:07 compute-0 nova_compute[185480]: 2026-01-27 19:29:07.626 185484 DEBUG nova.network.neutron [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:29:07 compute-0 nova_compute[185480]: 2026-01-27 19:29:07.934 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:08 compute-0 nova_compute[185480]: 2026-01-27 19:29:08.298 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:08 compute-0 nova_compute[185480]: 2026-01-27 19:29:08.432 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:08 compute-0 nova_compute[185480]: 2026-01-27 19:29:08.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:29:09 compute-0 nova_compute[185480]: 2026-01-27 19:29:09.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.482 185484 DEBUG nova.network.neutron [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Updating instance_info_cache with network_info: [{"id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "address": "fa:16:3e:de:a4:15", "network": {"id": "47175578-eb32-4720-93c5-05fa0d34701f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2004790355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50b0e23834964280a34973a87d80d1b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2f2dfe-3f", "ovs_interfaceid": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.616 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Releasing lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.616 185484 DEBUG nova.compute.manager [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Instance network_info: |[{"id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "address": "fa:16:3e:de:a4:15", "network": {"id": "47175578-eb32-4720-93c5-05fa0d34701f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2004790355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50b0e23834964280a34973a87d80d1b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2f2dfe-3f", "ovs_interfaceid": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
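The network_info blob logged above is the cache Nova has just built for port eb2f2dfe-3f62-4bba-8586-84ee449f5ae5. A short sketch of pulling the fixed IP, MAC and MTU back out of that structure; the dict below is a trimmed copy of the logged entry, not the full cache:

    # Trimmed copy of the network_info entry from the log above.
    vif = {
        "id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5",
        "address": "fa:16:3e:de:a4:15",
        "devname": "tapeb2f2dfe-3f",
        "network": {
            "subnets": [{"cidr": "10.100.0.0/28",
                         "ips": [{"address": "10.100.0.8", "type": "fixed"}]}],
            "meta": {"mtu": 1442},
        },
    }

    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(vif["devname"], vif["address"], fixed_ips, vif["network"]["meta"]["mtu"])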
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.617 185484 DEBUG oslo_concurrency.lockutils [req-c60a1a9a-0872-4e25-8429-7befeb56b59f req-30f754df-1dcf-4bda-bfc7-3c3a1014b549 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.617 185484 DEBUG nova.network.neutron [req-c60a1a9a-0872-4e25-8429-7befeb56b59f req-30f754df-1dcf-4bda-bfc7-3c3a1014b549 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Refreshing network info cache for port eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.620 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Start _get_guest_xml network_info=[{"id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "address": "fa:16:3e:de:a4:15", "network": {"id": "47175578-eb32-4720-93c5-05fa0d34701f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2004790355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50b0e23834964280a34973a87d80d1b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2f2dfe-3f", "ovs_interfaceid": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T19:26:43Z,direct_url=<?>,disk_format='qcow2',id=729797c6-2677-44bd-a4a8-949d1f57b0a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T19:26:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': '729797c6-2677-44bd-a4a8-949d1f57b0a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.630 185484 WARNING nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.640 185484 DEBUG nova.virt.libvirt.host [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.641 185484 DEBUG nova.virt.libvirt.host [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.651 185484 DEBUG nova.virt.libvirt.host [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.652 185484 DEBUG nova.virt.libvirt.host [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.652 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.652 185484 DEBUG nova.virt.hardware [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T19:26:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='49f81b8c-e0df-4a53-87c6-69576be59651',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T19:26:43Z,direct_url=<?>,disk_format='qcow2',id=729797c6-2677-44bd-a4a8-949d1f57b0a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T19:26:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.653 185484 DEBUG nova.virt.hardware [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.653 185484 DEBUG nova.virt.hardware [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.654 185484 DEBUG nova.virt.hardware [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.655 185484 DEBUG nova.virt.hardware [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.655 185484 DEBUG nova.virt.hardware [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.656 185484 DEBUG nova.virt.hardware [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.656 185484 DEBUG nova.virt.hardware [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.657 185484 DEBUG nova.virt.hardware [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.657 185484 DEBUG nova.virt.hardware [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.658 185484 DEBUG nova.virt.hardware [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
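The hardware.py lines above enumerate every (sockets, cores, threads) factorisation of the instance's vCPU count that fits under the 65536 limits; for the 1-vCPU m1.nano flavor the only candidate is 1:1:1, which is what gets written into the guest XML below. A rough sketch of that enumeration (not Nova's implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))   # [(1, 1, 1)]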
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.662 185484 DEBUG nova.virt.libvirt.vif [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:28:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1826919436',display_name='tempest-ServerActionsTestJSON-server-1826919436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1826919436',id=9,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEarGQd886D0SbFKhVK8HYo71HIcxMpg6dPpVFammXr8EYi6IreecvuL3x3g5cO3e3t+GK1ndQQDhKRm+MPwxrmFEu4Je8Iz/i+X+gFHwYaMIMzTNGwxg61NLpSgA4dHQA==',key_name='tempest-keypair-1789477663',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='50b0e23834964280a34973a87d80d1b8',ramdisk_id='',reservation_id='r-q3q20g65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-741151556',owner_user_name='tempest-ServerActionsTestJSON-741151556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:28:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='79532101c66342a980a90799ac41a442',uuid=f79ddcc5-ee21-43e8-9d0d-60476a477361,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "address": "fa:16:3e:de:a4:15", "network": {"id": "47175578-eb32-4720-93c5-05fa0d34701f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2004790355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50b0e23834964280a34973a87d80d1b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2f2dfe-3f", "ovs_interfaceid": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "qbh_params": 
null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.663 185484 DEBUG nova.network.os_vif_util [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Converting VIF {"id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "address": "fa:16:3e:de:a4:15", "network": {"id": "47175578-eb32-4720-93c5-05fa0d34701f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2004790355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50b0e23834964280a34973a87d80d1b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2f2dfe-3f", "ovs_interfaceid": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.664 185484 DEBUG nova.network.os_vif_util [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:a4:15,bridge_name='br-int',has_traffic_filtering=True,id=eb2f2dfe-3f62-4bba-8586-84ee449f5ae5,network=Network(47175578-eb32-4720-93c5-05fa0d34701f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2f2dfe-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.665 185484 DEBUG nova.objects.instance [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lazy-loading 'pci_devices' on Instance uuid f79ddcc5-ee21-43e8-9d0d-60476a477361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.831 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] End _get_guest_xml xml=<domain type="kvm">
Jan 27 19:29:11 compute-0 nova_compute[185480]:   <uuid>f79ddcc5-ee21-43e8-9d0d-60476a477361</uuid>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   <name>instance-00000009</name>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   <memory>131072</memory>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   <vcpu>1</vcpu>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   <metadata>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <nova:name>tempest-ServerActionsTestJSON-server-1826919436</nova:name>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <nova:creationTime>2026-01-27 19:29:11</nova:creationTime>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <nova:flavor name="m1.nano">
Jan 27 19:29:11 compute-0 nova_compute[185480]:         <nova:memory>128</nova:memory>
Jan 27 19:29:11 compute-0 nova_compute[185480]:         <nova:disk>1</nova:disk>
Jan 27 19:29:11 compute-0 nova_compute[185480]:         <nova:swap>0</nova:swap>
Jan 27 19:29:11 compute-0 nova_compute[185480]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 19:29:11 compute-0 nova_compute[185480]:         <nova:vcpus>1</nova:vcpus>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       </nova:flavor>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <nova:owner>
Jan 27 19:29:11 compute-0 nova_compute[185480]:         <nova:user uuid="79532101c66342a980a90799ac41a442">tempest-ServerActionsTestJSON-741151556-project-member</nova:user>
Jan 27 19:29:11 compute-0 nova_compute[185480]:         <nova:project uuid="50b0e23834964280a34973a87d80d1b8">tempest-ServerActionsTestJSON-741151556</nova:project>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       </nova:owner>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <nova:root type="image" uuid="729797c6-2677-44bd-a4a8-949d1f57b0a2"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <nova:ports>
Jan 27 19:29:11 compute-0 nova_compute[185480]:         <nova:port uuid="eb2f2dfe-3f62-4bba-8586-84ee449f5ae5">
Jan 27 19:29:11 compute-0 nova_compute[185480]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:         </nova:port>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       </nova:ports>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     </nova:instance>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   </metadata>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   <sysinfo type="smbios">
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <system>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <entry name="manufacturer">RDO</entry>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <entry name="product">OpenStack Compute</entry>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <entry name="serial">f79ddcc5-ee21-43e8-9d0d-60476a477361</entry>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <entry name="uuid">f79ddcc5-ee21-43e8-9d0d-60476a477361</entry>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <entry name="family">Virtual Machine</entry>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     </system>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   </sysinfo>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   <os>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <boot dev="hd"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <smbios mode="sysinfo"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   </os>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   <features>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <acpi/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <apic/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <vmcoreinfo/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   </features>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   <clock offset="utc">
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <timer name="hpet" present="no"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   </clock>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   <cpu mode="host-model" match="exact">
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   </cpu>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   <devices>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <target dev="vda" bus="virtio"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <disk type="file" device="cdrom">
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.config"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <target dev="sda" bus="sata"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <interface type="ethernet">
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <mac address="fa:16:3e:de:a4:15"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <mtu size="1442"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <target dev="tapeb2f2dfe-3f"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     </interface>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <serial type="pty">
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <log file="/var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/console.log" append="off"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     </serial>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <video>
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     </video>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <input type="tablet" bus="usb"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <rng model="virtio">
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <backend model="random">/dev/urandom</backend>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     </rng>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <controller type="usb" index="0"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     <memballoon model="virtio">
Jan 27 19:29:11 compute-0 nova_compute[185480]:       <stats period="10"/>
Jan 27 19:29:11 compute-0 nova_compute[185480]:     </memballoon>
Jan 27 19:29:11 compute-0 nova_compute[185480]:   </devices>
Jan 27 19:29:11 compute-0 nova_compute[185480]: </domain>
Jan 27 19:29:11 compute-0 nova_compute[185480]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.833 185484 DEBUG nova.compute.manager [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Preparing to wait for external event network-vif-plugged-eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.834 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Acquiring lock "f79ddcc5-ee21-43e8-9d0d-60476a477361-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.835 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lock "f79ddcc5-ee21-43e8-9d0d-60476a477361-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.836 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lock "f79ddcc5-ee21-43e8-9d0d-60476a477361-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.838 185484 DEBUG nova.virt.libvirt.vif [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:28:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1826919436',display_name='tempest-ServerActionsTestJSON-server-1826919436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1826919436',id=9,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEarGQd886D0SbFKhVK8HYo71HIcxMpg6dPpVFammXr8EYi6IreecvuL3x3g5cO3e3t+GK1ndQQDhKRm+MPwxrmFEu4Je8Iz/i+X+gFHwYaMIMzTNGwxg61NLpSgA4dHQA==',key_name='tempest-keypair-1789477663',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='50b0e23834964280a34973a87d80d1b8',ramdisk_id='',reservation_id='r-q3q20g65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-741151556',owner_user_name='tempest-ServerActionsTestJSON-741151556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:28:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='79532101c66342a980a90799ac41a442',uuid=f79ddcc5-ee21-43e8-9d0d-60476a477361,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "address": "fa:16:3e:de:a4:15", "network": {"id": "47175578-eb32-4720-93c5-05fa0d34701f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2004790355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50b0e23834964280a34973a87d80d1b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2f2dfe-3f", "ovs_interfaceid": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.839 185484 DEBUG nova.network.os_vif_util [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Converting VIF {"id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "address": "fa:16:3e:de:a4:15", "network": {"id": "47175578-eb32-4720-93c5-05fa0d34701f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2004790355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50b0e23834964280a34973a87d80d1b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2f2dfe-3f", "ovs_interfaceid": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.841 185484 DEBUG nova.network.os_vif_util [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:a4:15,bridge_name='br-int',has_traffic_filtering=True,id=eb2f2dfe-3f62-4bba-8586-84ee449f5ae5,network=Network(47175578-eb32-4720-93c5-05fa0d34701f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2f2dfe-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.842 185484 DEBUG os_vif [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:a4:15,bridge_name='br-int',has_traffic_filtering=True,id=eb2f2dfe-3f62-4bba-8586-84ee449f5ae5,network=Network(47175578-eb32-4720-93c5-05fa0d34701f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2f2dfe-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.843 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.844 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.845 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.850 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.850 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb2f2dfe-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.851 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeb2f2dfe-3f, col_values=(('external_ids', {'iface-id': 'eb2f2dfe-3f62-4bba-8586-84ee449f5ae5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:a4:15', 'vm-uuid': 'f79ddcc5-ee21-43e8-9d0d-60476a477361'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.853 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:11 compute-0 NetworkManager[56191]: <info>  [1769542151.8557] manager: (tapeb2f2dfe-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.858 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.865 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:11 compute-0 nova_compute[185480]: 2026-01-27 19:29:11.866 185484 INFO os_vif [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:a4:15,bridge_name='br-int',has_traffic_filtering=True,id=eb2f2dfe-3f62-4bba-8586-84ee449f5ae5,network=Network(47175578-eb32-4720-93c5-05fa0d34701f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2f2dfe-3f')
Jan 27 19:29:12 compute-0 nova_compute[185480]: 2026-01-27 19:29:12.039 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:29:12 compute-0 nova_compute[185480]: 2026-01-27 19:29:12.039 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:29:12 compute-0 nova_compute[185480]: 2026-01-27 19:29:12.040 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] No VIF found with MAC fa:16:3e:de:a4:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 19:29:12 compute-0 nova_compute[185480]: 2026-01-27 19:29:12.041 185484 INFO nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Using config drive
Jan 27 19:29:12 compute-0 podman[249582]: 2026-01-27 19:29:12.365454057 +0000 UTC m=+0.129330530 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 27 19:29:12 compute-0 podman[249581]: 2026-01-27 19:29:12.38701631 +0000 UTC m=+0.150498962 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:29:12 compute-0 nova_compute[185480]: 2026-01-27 19:29:12.878 185484 INFO nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Creating config drive at /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.config
Jan 27 19:29:12 compute-0 nova_compute[185480]: 2026-01-27 19:29:12.890 185484 DEBUG oslo_concurrency.processutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppdcvv0uz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:29:13 compute-0 nova_compute[185480]: 2026-01-27 19:29:13.027 185484 DEBUG oslo_concurrency.processutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppdcvv0uz" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:29:13 compute-0 kernel: tapeb2f2dfe-3f: entered promiscuous mode
Jan 27 19:29:13 compute-0 NetworkManager[56191]: <info>  [1769542153.1192] manager: (tapeb2f2dfe-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Jan 27 19:29:13 compute-0 nova_compute[185480]: 2026-01-27 19:29:13.122 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:13 compute-0 ovn_controller[97647]: 2026-01-27T19:29:13Z|00096|binding|INFO|Claiming lport eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 for this chassis.
Jan 27 19:29:13 compute-0 ovn_controller[97647]: 2026-01-27T19:29:13Z|00097|binding|INFO|eb2f2dfe-3f62-4bba-8586-84ee449f5ae5: Claiming fa:16:3e:de:a4:15 10.100.0.8
Jan 27 19:29:13 compute-0 ovn_controller[97647]: 2026-01-27T19:29:13Z|00098|binding|INFO|Setting lport eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 ovn-installed in OVS
Jan 27 19:29:13 compute-0 nova_compute[185480]: 2026-01-27 19:29:13.146 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:13 compute-0 nova_compute[185480]: 2026-01-27 19:29:13.158 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:13 compute-0 systemd-udevd[249638]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:29:13 compute-0 systemd-machined[156762]: New machine qemu-9-instance-00000009.
Jan 27 19:29:13 compute-0 NetworkManager[56191]: <info>  [1769542153.1867] device (tapeb2f2dfe-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 19:29:13 compute-0 NetworkManager[56191]: <info>  [1769542153.1875] device (tapeb2f2dfe-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 19:29:13 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Jan 27 19:29:13 compute-0 nova_compute[185480]: 2026-01-27 19:29:13.303 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.347 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:a4:15 10.100.0.8'], port_security=['fa:16:3e:de:a4:15 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f79ddcc5-ee21-43e8-9d0d-60476a477361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47175578-eb32-4720-93c5-05fa0d34701f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50b0e23834964280a34973a87d80d1b8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '70af5d9c-758a-474c-b5ae-ae59e6fc0f1d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67fe5034-d2f9-4314-9752-b56b4f7c4fe3, chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=eb2f2dfe-3f62-4bba-8586-84ee449f5ae5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:29:13 compute-0 ovn_controller[97647]: 2026-01-27T19:29:13Z|00099|binding|INFO|Setting lport eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 up in Southbound
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.349 106898 INFO neutron.agent.ovn.metadata.agent [-] Port eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 in datapath 47175578-eb32-4720-93c5-05fa0d34701f bound to our chassis
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.351 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 47175578-eb32-4720-93c5-05fa0d34701f
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.369 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[1639439f-cff2-4545-95a9-fb54d00191b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.369 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap47175578-e1 in ovnmeta-47175578-eb32-4720-93c5-05fa0d34701f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.373 238834 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap47175578-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.373 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5cda76-195e-4562-a6dc-90eb5e4df834]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.374 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[5b61c6ca-46e5-4ce1-a4f1-e8ff8e3892d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.388 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9753da-4340-4a23-917f-e3548e84edbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.427 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f1b1e2-56c2-44bd-a861-deecbe38859e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.473 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[b1afea24-687a-41f8-a5ec-269c78466a99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.484 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1e7e13-4956-4441-b2be-1bbb46823fb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 NetworkManager[56191]: <info>  [1769542153.4881] manager: (tap47175578-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Jan 27 19:29:13 compute-0 systemd-udevd[249641]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.522 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[5a602ab4-1d51-4920-8e22-8b70ca7743c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.526 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a1f14b-8d0f-4510-85e2-375066338e84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 podman[249650]: 2026-01-27 19:29:13.53278196 +0000 UTC m=+0.114462752 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, version=9.4, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, managed_by=edpm_ansible)
Jan 27 19:29:13 compute-0 NetworkManager[56191]: <info>  [1769542153.5621] device (tap47175578-e0): carrier: link connected
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.572 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[a427d5c6-1425-4943-ab92-49af6eae020f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.591 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6cdd95-37cb-4f84-b8d0-4e6dadff4600]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47175578-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:a0:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527013, 'reachable_time': 19548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249691, 'error': None, 'target': 'ovnmeta-47175578-eb32-4720-93c5-05fa0d34701f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.612 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[22172b88-9c6c-4bd8-be4f-70060ef14486]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:a0d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527013, 'tstamp': 527013}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249692, 'error': None, 'target': 'ovnmeta-47175578-eb32-4720-93c5-05fa0d34701f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.633 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd9d6d2-abb0-4e8d-9cdd-40a5c8fa30c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47175578-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:a0:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527013, 'reachable_time': 19548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249693, 'error': None, 'target': 'ovnmeta-47175578-eb32-4720-93c5-05fa0d34701f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.677 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[c879b90e-fffd-44b3-8a02-58040bacd165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.758 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[975f2b68-363f-4f96-afcc-c84aaccb2905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.759 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47175578-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.760 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.760 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47175578-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:13 compute-0 nova_compute[185480]: 2026-01-27 19:29:13.762 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:13 compute-0 NetworkManager[56191]: <info>  [1769542153.7637] manager: (tap47175578-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 27 19:29:13 compute-0 kernel: tap47175578-e0: entered promiscuous mode
Jan 27 19:29:13 compute-0 nova_compute[185480]: 2026-01-27 19:29:13.773 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.774 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap47175578-e0, col_values=(('external_ids', {'iface-id': '48c695f0-2fed-4bb1-9b53-847c4dc25e7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:13 compute-0 nova_compute[185480]: 2026-01-27 19:29:13.775 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:13 compute-0 ovn_controller[97647]: 2026-01-27T19:29:13Z|00100|binding|INFO|Releasing lport 48c695f0-2fed-4bb1-9b53-847c4dc25e7f from this chassis (sb_readonly=0)
Jan 27 19:29:13 compute-0 nova_compute[185480]: 2026-01-27 19:29:13.779 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.779 106898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/47175578-eb32-4720-93c5-05fa0d34701f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/47175578-eb32-4720-93c5-05fa0d34701f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.782 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[3eff09e4-bae4-4efc-a5e4-8565ed358e71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.783 106898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: global
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     log         /dev/log local0 debug
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     log-tag     haproxy-metadata-proxy-47175578-eb32-4720-93c5-05fa0d34701f
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     user        root
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     group       root
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     maxconn     1024
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     pidfile     /var/lib/neutron/external/pids/47175578-eb32-4720-93c5-05fa0d34701f.pid.haproxy
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     daemon
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: defaults
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     log global
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     mode http
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     option httplog
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     option dontlognull
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     option http-server-close
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     option forwardfor
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     retries                 3
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     timeout http-request    30s
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     timeout connect         30s
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     timeout client          32s
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     timeout server          32s
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     timeout http-keep-alive 30s
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: listen listener
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     bind 169.254.169.254:80
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:     http-request add-header X-OVN-Network-ID 47175578-eb32-4720-93c5-05fa0d34701f
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 19:29:13 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:13.783 106898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-47175578-eb32-4720-93c5-05fa0d34701f', 'env', 'PROCESS_TAG=haproxy-47175578-eb32-4720-93c5-05fa0d34701f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/47175578-eb32-4720-93c5-05fa0d34701f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 19:29:13 compute-0 nova_compute[185480]: 2026-01-27 19:29:13.818 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:14 compute-0 nova_compute[185480]: 2026-01-27 19:29:14.156 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542154.1546247, f79ddcc5-ee21-43e8-9d0d-60476a477361 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:29:14 compute-0 nova_compute[185480]: 2026-01-27 19:29:14.157 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] VM Started (Lifecycle Event)
Jan 27 19:29:14 compute-0 nova_compute[185480]: 2026-01-27 19:29:14.203 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:29:14 compute-0 nova_compute[185480]: 2026-01-27 19:29:14.213 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542154.1549132, f79ddcc5-ee21-43e8-9d0d-60476a477361 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:29:14 compute-0 nova_compute[185480]: 2026-01-27 19:29:14.214 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] VM Paused (Lifecycle Event)
Jan 27 19:29:14 compute-0 nova_compute[185480]: 2026-01-27 19:29:14.262 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:29:14 compute-0 nova_compute[185480]: 2026-01-27 19:29:14.270 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:29:14 compute-0 nova_compute[185480]: 2026-01-27 19:29:14.301 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:29:14 compute-0 podman[249730]: 2026-01-27 19:29:14.318631307 +0000 UTC m=+0.055239518 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 19:29:14 compute-0 podman[249730]: 2026-01-27 19:29:14.528064896 +0000 UTC m=+0.264673047 container create 082ecdf1117747b38ce7c47311c58c92ecd3700af56126a050ded889894b097c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47175578-eb32-4720-93c5-05fa0d34701f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:29:14 compute-0 systemd[1]: Started libpod-conmon-082ecdf1117747b38ce7c47311c58c92ecd3700af56126a050ded889894b097c.scope.
Jan 27 19:29:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 19:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84797be3328e4a6c68ba9ff5f2810c14b140d3c335090f397cecb2bfecf8abd3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 19:29:14 compute-0 podman[249730]: 2026-01-27 19:29:14.683942703 +0000 UTC m=+0.420550844 container init 082ecdf1117747b38ce7c47311c58c92ecd3700af56126a050ded889894b097c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47175578-eb32-4720-93c5-05fa0d34701f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 19:29:14 compute-0 podman[249730]: 2026-01-27 19:29:14.694286118 +0000 UTC m=+0.430894259 container start 082ecdf1117747b38ce7c47311c58c92ecd3700af56126a050ded889894b097c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47175578-eb32-4720-93c5-05fa0d34701f, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 19:29:14 compute-0 neutron-haproxy-ovnmeta-47175578-eb32-4720-93c5-05fa0d34701f[249744]: [NOTICE]   (249748) : New worker (249750) forked
Jan 27 19:29:14 compute-0 neutron-haproxy-ovnmeta-47175578-eb32-4720-93c5-05fa0d34701f[249744]: [NOTICE]   (249748) : Loading success.
Jan 27 19:29:14 compute-0 nova_compute[185480]: 2026-01-27 19:29:14.922 185484 DEBUG nova.objects.instance [None req-16f2285d-aba8-4520-b190-f35caf022a5c 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lazy-loading 'flavor' on Instance uuid be961482-e05a-4655-96ea-7d4810738a3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:29:15 compute-0 nova_compute[185480]: 2026-01-27 19:29:15.060 185484 DEBUG oslo_concurrency.lockutils [None req-16f2285d-aba8-4520-b190-f35caf022a5c 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquiring lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:29:15 compute-0 nova_compute[185480]: 2026-01-27 19:29:15.060 185484 DEBUG oslo_concurrency.lockutils [None req-16f2285d-aba8-4520-b190-f35caf022a5c 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquired lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:29:15 compute-0 nova_compute[185480]: 2026-01-27 19:29:15.066 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:15 compute-0 nova_compute[185480]: 2026-01-27 19:29:15.497 185484 DEBUG nova.network.neutron [req-c60a1a9a-0872-4e25-8429-7befeb56b59f req-30f754df-1dcf-4bda-bfc7-3c3a1014b549 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Updated VIF entry in instance network info cache for port eb2f2dfe-3f62-4bba-8586-84ee449f5ae5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:29:15 compute-0 nova_compute[185480]: 2026-01-27 19:29:15.497 185484 DEBUG nova.network.neutron [req-c60a1a9a-0872-4e25-8429-7befeb56b59f req-30f754df-1dcf-4bda-bfc7-3c3a1014b549 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Updating instance_info_cache with network_info: [{"id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "address": "fa:16:3e:de:a4:15", "network": {"id": "47175578-eb32-4720-93c5-05fa0d34701f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2004790355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50b0e23834964280a34973a87d80d1b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2f2dfe-3f", "ovs_interfaceid": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:29:15 compute-0 nova_compute[185480]: 2026-01-27 19:29:15.563 185484 DEBUG oslo_concurrency.lockutils [req-c60a1a9a-0872-4e25-8429-7befeb56b59f req-30f754df-1dcf-4bda-bfc7-3c3a1014b549 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:29:16 compute-0 nova_compute[185480]: 2026-01-27 19:29:16.855 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:18 compute-0 nova_compute[185480]: 2026-01-27 19:29:18.305 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:18 compute-0 nova_compute[185480]: 2026-01-27 19:29:18.348 185484 DEBUG nova.network.neutron [None req-16f2285d-aba8-4520-b190-f35caf022a5c 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:29:18 compute-0 nova_compute[185480]: 2026-01-27 19:29:18.492 185484 DEBUG nova.compute.manager [req-c29fea17-57f6-4b9c-9a3b-86e89bd5e767 req-f4d919bd-0f4b-44b1-947d-2bb2632c4def bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Received event network-changed-b0ee253c-f6f7-42b0-906e-b575d0104fbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:18 compute-0 nova_compute[185480]: 2026-01-27 19:29:18.493 185484 DEBUG nova.compute.manager [req-c29fea17-57f6-4b9c-9a3b-86e89bd5e767 req-f4d919bd-0f4b-44b1-947d-2bb2632c4def bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Refreshing instance network info cache due to event network-changed-b0ee253c-f6f7-42b0-906e-b575d0104fbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:29:18 compute-0 nova_compute[185480]: 2026-01-27 19:29:18.494 185484 DEBUG oslo_concurrency.lockutils [req-c29fea17-57f6-4b9c-9a3b-86e89bd5e767 req-f4d919bd-0f4b-44b1-947d-2bb2632c4def bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.416 185484 DEBUG nova.compute.manager [req-63a7097a-c72b-4478-8981-b8568599c9ae req-0a60b389-88ec-4bdc-ae37-b6651d58164c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Received event network-vif-plugged-eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.417 185484 DEBUG oslo_concurrency.lockutils [req-63a7097a-c72b-4478-8981-b8568599c9ae req-0a60b389-88ec-4bdc-ae37-b6651d58164c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "f79ddcc5-ee21-43e8-9d0d-60476a477361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.417 185484 DEBUG oslo_concurrency.lockutils [req-63a7097a-c72b-4478-8981-b8568599c9ae req-0a60b389-88ec-4bdc-ae37-b6651d58164c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "f79ddcc5-ee21-43e8-9d0d-60476a477361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.418 185484 DEBUG oslo_concurrency.lockutils [req-63a7097a-c72b-4478-8981-b8568599c9ae req-0a60b389-88ec-4bdc-ae37-b6651d58164c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "f79ddcc5-ee21-43e8-9d0d-60476a477361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.419 185484 DEBUG nova.compute.manager [req-63a7097a-c72b-4478-8981-b8568599c9ae req-0a60b389-88ec-4bdc-ae37-b6651d58164c bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Processing event network-vif-plugged-eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.420 185484 DEBUG nova.compute.manager [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.429 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542159.4289947, f79ddcc5-ee21-43e8-9d0d-60476a477361 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.430 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] VM Resumed (Lifecycle Event)
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.433 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.444 185484 INFO nova.virt.libvirt.driver [-] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Instance spawned successfully.
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.445 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.457 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.475 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.483 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.485 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.486 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.486 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.487 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.487 185484 DEBUG nova.virt.libvirt.driver [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.500 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.601 185484 INFO nova.compute.manager [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Took 23.45 seconds to spawn the instance on the hypervisor.
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.602 185484 DEBUG nova.compute.manager [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.697 185484 INFO nova.compute.manager [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Took 24.75 seconds to build instance.
Jan 27 19:29:19 compute-0 nova_compute[185480]: 2026-01-27 19:29:19.835 185484 DEBUG oslo_concurrency.lockutils [None req-17324275-26ef-4fd2-bb1a-8f8bda66be8e 79532101c66342a980a90799ac41a442 50b0e23834964280a34973a87d80d1b8 - - default default] Lock "f79ddcc5-ee21-43e8-9d0d-60476a477361" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:20 compute-0 podman[249760]: 2026-01-27 19:29:20.36286112 +0000 UTC m=+0.128264533 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter)
Jan 27 19:29:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:20.541 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:20.541 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:20.542 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:21 compute-0 nova_compute[185480]: 2026-01-27 19:29:21.858 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:22 compute-0 nova_compute[185480]: 2026-01-27 19:29:22.575 185484 DEBUG nova.network.neutron [None req-16f2285d-aba8-4520-b190-f35caf022a5c 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updating instance_info_cache with network_info: [{"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:29:22 compute-0 nova_compute[185480]: 2026-01-27 19:29:22.678 185484 DEBUG oslo_concurrency.lockutils [None req-16f2285d-aba8-4520-b190-f35caf022a5c 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Releasing lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:29:22 compute-0 nova_compute[185480]: 2026-01-27 19:29:22.680 185484 DEBUG nova.compute.manager [None req-16f2285d-aba8-4520-b190-f35caf022a5c 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 27 19:29:22 compute-0 nova_compute[185480]: 2026-01-27 19:29:22.680 185484 DEBUG nova.compute.manager [None req-16f2285d-aba8-4520-b190-f35caf022a5c 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] network_info to inject: |[{"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 27 19:29:22 compute-0 nova_compute[185480]: 2026-01-27 19:29:22.686 185484 DEBUG oslo_concurrency.lockutils [req-c29fea17-57f6-4b9c-9a3b-86e89bd5e767 req-f4d919bd-0f4b-44b1-947d-2bb2632c4def bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:29:22 compute-0 nova_compute[185480]: 2026-01-27 19:29:22.687 185484 DEBUG nova.network.neutron [req-c29fea17-57f6-4b9c-9a3b-86e89bd5e767 req-f4d919bd-0f4b-44b1-947d-2bb2632c4def bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Refreshing network info cache for port b0ee253c-f6f7-42b0-906e-b575d0104fbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:29:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:22.985 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:29:22 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:22.988 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:29:23 compute-0 nova_compute[185480]: 2026-01-27 19:29:23.009 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:23 compute-0 nova_compute[185480]: 2026-01-27 19:29:23.308 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:23 compute-0 nova_compute[185480]: 2026-01-27 19:29:23.430 185484 DEBUG nova.compute.manager [req-fb130cdb-b84e-4916-a5d6-075cf4f863be req-7088bad8-58ca-4e4c-b4b3-8e1d89e41729 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Received event network-vif-plugged-eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:23 compute-0 nova_compute[185480]: 2026-01-27 19:29:23.431 185484 DEBUG oslo_concurrency.lockutils [req-fb130cdb-b84e-4916-a5d6-075cf4f863be req-7088bad8-58ca-4e4c-b4b3-8e1d89e41729 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "f79ddcc5-ee21-43e8-9d0d-60476a477361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:23 compute-0 nova_compute[185480]: 2026-01-27 19:29:23.432 185484 DEBUG oslo_concurrency.lockutils [req-fb130cdb-b84e-4916-a5d6-075cf4f863be req-7088bad8-58ca-4e4c-b4b3-8e1d89e41729 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "f79ddcc5-ee21-43e8-9d0d-60476a477361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:23 compute-0 nova_compute[185480]: 2026-01-27 19:29:23.432 185484 DEBUG oslo_concurrency.lockutils [req-fb130cdb-b84e-4916-a5d6-075cf4f863be req-7088bad8-58ca-4e4c-b4b3-8e1d89e41729 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "f79ddcc5-ee21-43e8-9d0d-60476a477361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:23 compute-0 nova_compute[185480]: 2026-01-27 19:29:23.433 185484 DEBUG nova.compute.manager [req-fb130cdb-b84e-4916-a5d6-075cf4f863be req-7088bad8-58ca-4e4c-b4b3-8e1d89e41729 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] No waiting events found dispatching network-vif-plugged-eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:29:23 compute-0 nova_compute[185480]: 2026-01-27 19:29:23.434 185484 WARNING nova.compute.manager [req-fb130cdb-b84e-4916-a5d6-075cf4f863be req-7088bad8-58ca-4e4c-b4b3-8e1d89e41729 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Received unexpected event network-vif-plugged-eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 for instance with vm_state active and task_state None.
Jan 27 19:29:25 compute-0 nova_compute[185480]: 2026-01-27 19:29:25.669 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:25 compute-0 nova_compute[185480]: 2026-01-27 19:29:25.971 185484 DEBUG nova.objects.instance [None req-a59f1bda-9f1a-431b-b580-2da94d542479 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lazy-loading 'flavor' on Instance uuid be961482-e05a-4655-96ea-7d4810738a3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:29:26 compute-0 nova_compute[185480]: 2026-01-27 19:29:26.001 185484 DEBUG oslo_concurrency.lockutils [None req-a59f1bda-9f1a-431b-b580-2da94d542479 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquiring lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:29:26 compute-0 nova_compute[185480]: 2026-01-27 19:29:26.257 185484 DEBUG nova.compute.manager [req-e911b2e4-d5dd-4883-a543-5c0fbfe812fa req-ce306b88-734f-4113-b150-41a3f89d86e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Received event network-changed-eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:26 compute-0 nova_compute[185480]: 2026-01-27 19:29:26.258 185484 DEBUG nova.compute.manager [req-e911b2e4-d5dd-4883-a543-5c0fbfe812fa req-ce306b88-734f-4113-b150-41a3f89d86e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Refreshing instance network info cache due to event network-changed-eb2f2dfe-3f62-4bba-8586-84ee449f5ae5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:29:26 compute-0 nova_compute[185480]: 2026-01-27 19:29:26.259 185484 DEBUG oslo_concurrency.lockutils [req-e911b2e4-d5dd-4883-a543-5c0fbfe812fa req-ce306b88-734f-4113-b150-41a3f89d86e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:29:26 compute-0 nova_compute[185480]: 2026-01-27 19:29:26.260 185484 DEBUG oslo_concurrency.lockutils [req-e911b2e4-d5dd-4883-a543-5c0fbfe812fa req-ce306b88-734f-4113-b150-41a3f89d86e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:29:26 compute-0 nova_compute[185480]: 2026-01-27 19:29:26.261 185484 DEBUG nova.network.neutron [req-e911b2e4-d5dd-4883-a543-5c0fbfe812fa req-ce306b88-734f-4113-b150-41a3f89d86e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Refreshing network info cache for port eb2f2dfe-3f62-4bba-8586-84ee449f5ae5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:29:26 compute-0 nova_compute[185480]: 2026-01-27 19:29:26.861 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:26 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:26.991 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:27 compute-0 podman[249781]: 2026-01-27 19:29:27.351423371 +0000 UTC m=+0.118055670 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 19:29:27 compute-0 nova_compute[185480]: 2026-01-27 19:29:27.489 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Acquiring lock "574ebdb2-135f-42aa-aa9a-4d986501daf4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:27 compute-0 nova_compute[185480]: 2026-01-27 19:29:27.490 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:27 compute-0 nova_compute[185480]: 2026-01-27 19:29:27.518 185484 DEBUG nova.compute.manager [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 19:29:27 compute-0 nova_compute[185480]: 2026-01-27 19:29:27.563 185484 DEBUG nova.network.neutron [req-c29fea17-57f6-4b9c-9a3b-86e89bd5e767 req-f4d919bd-0f4b-44b1-947d-2bb2632c4def bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updated VIF entry in instance network info cache for port b0ee253c-f6f7-42b0-906e-b575d0104fbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:29:27 compute-0 nova_compute[185480]: 2026-01-27 19:29:27.564 185484 DEBUG nova.network.neutron [req-c29fea17-57f6-4b9c-9a3b-86e89bd5e767 req-f4d919bd-0f4b-44b1-947d-2bb2632c4def bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updating instance_info_cache with network_info: [{"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:29:27 compute-0 nova_compute[185480]: 2026-01-27 19:29:27.598 185484 DEBUG oslo_concurrency.lockutils [req-c29fea17-57f6-4b9c-9a3b-86e89bd5e767 req-f4d919bd-0f4b-44b1-947d-2bb2632c4def bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:29:27 compute-0 nova_compute[185480]: 2026-01-27 19:29:27.599 185484 DEBUG oslo_concurrency.lockutils [None req-a59f1bda-9f1a-431b-b580-2da94d542479 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquired lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:29:27 compute-0 nova_compute[185480]: 2026-01-27 19:29:27.643 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:27 compute-0 nova_compute[185480]: 2026-01-27 19:29:27.645 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:27 compute-0 nova_compute[185480]: 2026-01-27 19:29:27.670 185484 DEBUG nova.virt.hardware [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 19:29:27 compute-0 nova_compute[185480]: 2026-01-27 19:29:27.670 185484 INFO nova.compute.claims [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Claim successful on node compute-0.ctlplane.example.com
Jan 27 19:29:27 compute-0 nova_compute[185480]: 2026-01-27 19:29:27.977 185484 DEBUG nova.compute.provider_tree [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.011 185484 DEBUG nova.scheduler.client.report [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.161 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.162 185484 DEBUG nova.compute.manager [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.219 185484 DEBUG nova.compute.manager [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.220 185484 DEBUG nova.network.neutron [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.241 185484 INFO nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.269 185484 DEBUG nova.compute.manager [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.313 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.425 185484 DEBUG nova.compute.manager [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.427 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.428 185484 INFO nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Creating image(s)
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.428 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Acquiring lock "/var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.429 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "/var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.430 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "/var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.455 185484 DEBUG oslo_concurrency.processutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.534 185484 DEBUG oslo_concurrency.processutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.535 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Acquiring lock "69104a6fcf619df4c492b27a202c23b5821c0e32" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.537 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.565 185484 DEBUG oslo_concurrency.processutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.647 185484 DEBUG oslo_concurrency.processutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.649 185484 DEBUG oslo_concurrency.processutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32,backing_fmt=raw /var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.700 185484 DEBUG oslo_concurrency.processutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32,backing_fmt=raw /var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk 1073741824" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.702 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.703 185484 DEBUG oslo_concurrency.processutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.771 185484 DEBUG oslo_concurrency.processutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.772 185484 DEBUG nova.virt.disk.api [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Checking if we can resize image /var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.772 185484 DEBUG oslo_concurrency.processutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.828 185484 DEBUG nova.policy [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '58fd103107724704ada2dddb2ed9734d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea88acf7f2624e02bf32ef2e9e9dc9df', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.839 185484 DEBUG oslo_concurrency.processutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.840 185484 DEBUG nova.virt.disk.api [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Cannot resize image /var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 19:29:28 compute-0 nova_compute[185480]: 2026-01-27 19:29:28.840 185484 DEBUG nova.objects.instance [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lazy-loading 'migration_context' on Instance uuid 574ebdb2-135f-42aa-aa9a-4d986501daf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:29:29 compute-0 nova_compute[185480]: 2026-01-27 19:29:29.185 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 19:29:29 compute-0 nova_compute[185480]: 2026-01-27 19:29:29.185 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Ensure instance console log exists: /var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 19:29:29 compute-0 nova_compute[185480]: 2026-01-27 19:29:29.186 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:29 compute-0 nova_compute[185480]: 2026-01-27 19:29:29.186 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:29 compute-0 nova_compute[185480]: 2026-01-27 19:29:29.187 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:29 compute-0 nova_compute[185480]: 2026-01-27 19:29:29.480 185484 DEBUG nova.network.neutron [req-e911b2e4-d5dd-4883-a543-5c0fbfe812fa req-ce306b88-734f-4113-b150-41a3f89d86e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Updated VIF entry in instance network info cache for port eb2f2dfe-3f62-4bba-8586-84ee449f5ae5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:29:29 compute-0 nova_compute[185480]: 2026-01-27 19:29:29.481 185484 DEBUG nova.network.neutron [req-e911b2e4-d5dd-4883-a543-5c0fbfe812fa req-ce306b88-734f-4113-b150-41a3f89d86e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Updating instance_info_cache with network_info: [{"id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "address": "fa:16:3e:de:a4:15", "network": {"id": "47175578-eb32-4720-93c5-05fa0d34701f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2004790355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50b0e23834964280a34973a87d80d1b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2f2dfe-3f", "ovs_interfaceid": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:29:29 compute-0 nova_compute[185480]: 2026-01-27 19:29:29.510 185484 DEBUG oslo_concurrency.lockutils [req-e911b2e4-d5dd-4883-a543-5c0fbfe812fa req-ce306b88-734f-4113-b150-41a3f89d86e8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:29:29 compute-0 podman[201378]: time="2026-01-27T19:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:29:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 27 19:29:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4839 "" "Go-http-client/1.1"
Jan 27 19:29:30 compute-0 sshd-session[249816]: Connection closed by authenticating user root 156.227.233.86 port 38532 [preauth]
Jan 27 19:29:30 compute-0 podman[249821]: 2026-01-27 19:29:30.35132517 +0000 UTC m=+0.101688677 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 19:29:30 compute-0 podman[249818]: 2026-01-27 19:29:30.362118536 +0000 UTC m=+0.120378269 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 19:29:30 compute-0 podman[249820]: 2026-01-27 19:29:30.409761604 +0000 UTC m=+0.158666375 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:29:30 compute-0 nova_compute[185480]: 2026-01-27 19:29:30.900 185484 DEBUG nova.network.neutron [None req-a59f1bda-9f1a-431b-b580-2da94d542479 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:29:31 compute-0 sshd-session[249819]: Connection closed by authenticating user root 156.227.233.86 port 52594 [preauth]
Jan 27 19:29:31 compute-0 nova_compute[185480]: 2026-01-27 19:29:31.101 185484 DEBUG nova.compute.manager [req-e61cc371-28bf-4e67-a9a9-919421e8a40f req-dad88d9b-985e-4010-b590-b6002ccedb81 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Received event network-changed-b0ee253c-f6f7-42b0-906e-b575d0104fbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:31 compute-0 nova_compute[185480]: 2026-01-27 19:29:31.102 185484 DEBUG nova.compute.manager [req-e61cc371-28bf-4e67-a9a9-919421e8a40f req-dad88d9b-985e-4010-b590-b6002ccedb81 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Refreshing instance network info cache due to event network-changed-b0ee253c-f6f7-42b0-906e-b575d0104fbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:29:31 compute-0 nova_compute[185480]: 2026-01-27 19:29:31.103 185484 DEBUG oslo_concurrency.lockutils [req-e61cc371-28bf-4e67-a9a9-919421e8a40f req-dad88d9b-985e-4010-b590-b6002ccedb81 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:29:31 compute-0 openstack_network_exporter[204477]: ERROR   19:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:29:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:29:31 compute-0 openstack_network_exporter[204477]: ERROR   19:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:29:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:29:31 compute-0 nova_compute[185480]: 2026-01-27 19:29:31.584 185484 DEBUG nova.network.neutron [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Successfully created port: a8863494-47b5-4610-a914-aacdead1041c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 19:29:31 compute-0 nova_compute[185480]: 2026-01-27 19:29:31.866 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:32 compute-0 sshd-session[249881]: Connection closed by authenticating user root 156.227.233.86 port 37802 [preauth]
Jan 27 19:29:33 compute-0 sshd-session[249883]: Connection closed by authenticating user root 156.227.233.86 port 47506 [preauth]
Jan 27 19:29:33 compute-0 nova_compute[185480]: 2026-01-27 19:29:33.197 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:33 compute-0 nova_compute[185480]: 2026-01-27 19:29:33.315 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:34 compute-0 sshd-session[249885]: Connection closed by authenticating user root 156.227.233.86 port 33520 [preauth]
Jan 27 19:29:34 compute-0 nova_compute[185480]: 2026-01-27 19:29:34.572 185484 DEBUG nova.network.neutron [None req-a59f1bda-9f1a-431b-b580-2da94d542479 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updating instance_info_cache with network_info: [{"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:29:34 compute-0 nova_compute[185480]: 2026-01-27 19:29:34.613 185484 DEBUG oslo_concurrency.lockutils [None req-a59f1bda-9f1a-431b-b580-2da94d542479 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Releasing lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:29:34 compute-0 nova_compute[185480]: 2026-01-27 19:29:34.615 185484 DEBUG nova.compute.manager [None req-a59f1bda-9f1a-431b-b580-2da94d542479 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 27 19:29:34 compute-0 nova_compute[185480]: 2026-01-27 19:29:34.616 185484 DEBUG nova.compute.manager [None req-a59f1bda-9f1a-431b-b580-2da94d542479 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] network_info to inject: |[{"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 27 19:29:34 compute-0 nova_compute[185480]: 2026-01-27 19:29:34.621 185484 DEBUG oslo_concurrency.lockutils [req-e61cc371-28bf-4e67-a9a9-919421e8a40f req-dad88d9b-985e-4010-b590-b6002ccedb81 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:29:34 compute-0 nova_compute[185480]: 2026-01-27 19:29:34.622 185484 DEBUG nova.network.neutron [req-e61cc371-28bf-4e67-a9a9-919421e8a40f req-dad88d9b-985e-4010-b590-b6002ccedb81 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Refreshing network info cache for port b0ee253c-f6f7-42b0-906e-b575d0104fbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:29:35 compute-0 sshd-session[249887]: Connection closed by authenticating user root 156.227.233.86 port 45338 [preauth]
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.610 185484 DEBUG nova.network.neutron [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Successfully updated port: a8863494-47b5-4610-a914-aacdead1041c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.672 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Acquiring lock "refresh_cache-574ebdb2-135f-42aa-aa9a-4d986501daf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.673 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Acquired lock "refresh_cache-574ebdb2-135f-42aa-aa9a-4d986501daf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.674 185484 DEBUG nova.network.neutron [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.878 185484 DEBUG oslo_concurrency.lockutils [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquiring lock "be961482-e05a-4655-96ea-7d4810738a3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.879 185484 DEBUG oslo_concurrency.lockutils [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.880 185484 DEBUG oslo_concurrency.lockutils [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquiring lock "be961482-e05a-4655-96ea-7d4810738a3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.880 185484 DEBUG oslo_concurrency.lockutils [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.881 185484 DEBUG oslo_concurrency.lockutils [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.882 185484 INFO nova.compute.manager [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Terminating instance
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.885 185484 DEBUG nova.compute.manager [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 19:29:35 compute-0 kernel: tapb0ee253c-f6 (unregistering): left promiscuous mode
Jan 27 19:29:35 compute-0 NetworkManager[56191]: <info>  [1769542175.9268] device (tapb0ee253c-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.943 185484 DEBUG nova.compute.manager [req-8f8dadcf-2f8c-4d25-bb91-a1d95bb16e45 req-32093ecb-8bbe-41af-b574-0edb3ad0217f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Received event network-changed-a8863494-47b5-4610-a914-aacdead1041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.944 185484 DEBUG nova.compute.manager [req-8f8dadcf-2f8c-4d25-bb91-a1d95bb16e45 req-32093ecb-8bbe-41af-b574-0edb3ad0217f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Refreshing instance network info cache due to event network-changed-a8863494-47b5-4610-a914-aacdead1041c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.944 185484 DEBUG oslo_concurrency.lockutils [req-8f8dadcf-2f8c-4d25-bb91-a1d95bb16e45 req-32093ecb-8bbe-41af-b574-0edb3ad0217f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-574ebdb2-135f-42aa-aa9a-4d986501daf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:29:35 compute-0 ovn_controller[97647]: 2026-01-27T19:29:35Z|00101|binding|INFO|Releasing lport b0ee253c-f6f7-42b0-906e-b575d0104fbb from this chassis (sb_readonly=0)
Jan 27 19:29:35 compute-0 ovn_controller[97647]: 2026-01-27T19:29:35Z|00102|binding|INFO|Setting lport b0ee253c-f6f7-42b0-906e-b575d0104fbb down in Southbound
Jan 27 19:29:35 compute-0 ovn_controller[97647]: 2026-01-27T19:29:35Z|00103|binding|INFO|Removing iface tapb0ee253c-f6 ovn-installed in OVS
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.950 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:35 compute-0 nova_compute[185480]: 2026-01-27 19:29:35.970 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:35 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:35.974 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:60:a3 10.100.0.13'], port_security=['fa:16:3e:d8:60:a3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'be961482-e05a-4655-96ea-7d4810738a3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4102701402ff4f059dd67182960e5b64', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'eda9f509-731f-435f-adb5-63c842fd18c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa76a92b-3621-45ba-b3a2-0a9908c3ee96, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=b0ee253c-f6f7-42b0-906e-b575d0104fbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:29:35 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:35.975 106898 INFO neutron.agent.ovn.metadata.agent [-] Port b0ee253c-f6f7-42b0-906e-b575d0104fbb in datapath 400ee5e7-0154-41c9-b068-b9b4ba0c2fdc unbound from our chassis
Jan 27 19:29:35 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:35.978 106898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 400ee5e7-0154-41c9-b068-b9b4ba0c2fdc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 19:29:35 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:35.979 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f6f835-fc25-4368-8f2c-2398520a1c8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:35 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:35.980 106898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc namespace which is not needed anymore
Jan 27 19:29:35 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 27 19:29:35 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 46.856s CPU time.
Jan 27 19:29:35 compute-0 systemd-machined[156762]: Machine qemu-7-instance-00000007 terminated.
Jan 27 19:29:36 compute-0 sshd-session[249889]: Connection closed by authenticating user root 156.227.233.86 port 59630 [preauth]
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.119 185484 DEBUG nova.network.neutron [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.127 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.137 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.164 185484 INFO nova.virt.libvirt.driver [-] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Instance destroyed successfully.
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.165 185484 DEBUG nova.objects.instance [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lazy-loading 'resources' on Instance uuid be961482-e05a-4655-96ea-7d4810738a3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.193 185484 DEBUG nova.virt.libvirt.vif [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T19:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1779555975',display_name='tempest-AttachInterfacesUnderV243Test-server-1779555975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1779555975',id=7,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyqdFH2eG14J/b7rtsScThh8sRg5o4HAKOX+Lemc285oMyiai6RHuVK35i3ylocgEr0tzDFqxutYlO+GlSDEWhiQGDeQuYFNXRaqskh88JGgZZcqOJFgm88hLGt14KInw==',key_name='tempest-keypair-1143519939',keypairs=<?>,launch_index=0,launched_at=2026-01-27T19:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4102701402ff4f059dd67182960e5b64',ramdisk_id='',reservation_id='r-yvft0q2g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-66798845',owner_user_name='tempest-AttachInterfacesUnderV243Test-66798845-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T19:29:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='87274dd877fa476fa885a5665ba052ec',uuid=be961482-e05a-4655-96ea-7d4810738a3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.193 185484 DEBUG nova.network.os_vif_util [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Converting VIF {"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.194 185484 DEBUG nova.network.os_vif_util [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:60:a3,bridge_name='br-int',has_traffic_filtering=True,id=b0ee253c-f6f7-42b0-906e-b575d0104fbb,network=Network(400ee5e7-0154-41c9-b068-b9b4ba0c2fdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0ee253c-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.195 185484 DEBUG os_vif [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:60:a3,bridge_name='br-int',has_traffic_filtering=True,id=b0ee253c-f6f7-42b0-906e-b575d0104fbb,network=Network(400ee5e7-0154-41c9-b068-b9b4ba0c2fdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0ee253c-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.197 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.197 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0ee253c-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.204 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.208 185484 INFO os_vif [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:60:a3,bridge_name='br-int',has_traffic_filtering=True,id=b0ee253c-f6f7-42b0-906e-b575d0104fbb,network=Network(400ee5e7-0154-41c9-b068-b9b4ba0c2fdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0ee253c-f6')
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.209 185484 INFO nova.virt.libvirt.driver [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Deleting instance files /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c_del
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.210 185484 INFO nova.virt.libvirt.driver [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Deletion of /var/lib/nova/instances/be961482-e05a-4655-96ea-7d4810738a3c_del complete
Jan 27 19:29:36 compute-0 neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc[248925]: [NOTICE]   (248929) : haproxy version is 2.8.14-c23fe91
Jan 27 19:29:36 compute-0 neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc[248925]: [NOTICE]   (248929) : path to executable is /usr/sbin/haproxy
Jan 27 19:29:36 compute-0 neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc[248925]: [WARNING]  (248929) : Exiting Master process...
Jan 27 19:29:36 compute-0 neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc[248925]: [WARNING]  (248929) : Exiting Master process...
Jan 27 19:29:36 compute-0 neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc[248925]: [ALERT]    (248929) : Current worker (248931) exited with code 143 (Terminated)
Jan 27 19:29:36 compute-0 neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc[248925]: [WARNING]  (248929) : All workers exited. Exiting... (0)
Jan 27 19:29:36 compute-0 systemd[1]: libpod-e8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8.scope: Deactivated successfully.
Jan 27 19:29:36 compute-0 podman[249917]: 2026-01-27 19:29:36.356605041 +0000 UTC m=+0.209780200 container died e8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 19:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8-userdata-shm.mount: Deactivated successfully.
Jan 27 19:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-953c46d3744a30a58fa35cb21e01889abd6bc14e4175c98ca80a9c71a6102cdf-merged.mount: Deactivated successfully.
Jan 27 19:29:36 compute-0 podman[249917]: 2026-01-27 19:29:36.492466031 +0000 UTC m=+0.345641200 container cleanup e8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.511 185484 INFO nova.compute.manager [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Took 0.62 seconds to destroy the instance on the hypervisor.
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.512 185484 DEBUG oslo.service.loopingcall [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.513 185484 DEBUG nova.compute.manager [-] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.513 185484 DEBUG nova.network.neutron [-] [instance: be961482-e05a-4655-96ea-7d4810738a3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 19:29:36 compute-0 systemd[1]: libpod-conmon-e8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8.scope: Deactivated successfully.
Jan 27 19:29:36 compute-0 podman[249963]: 2026-01-27 19:29:36.660029435 +0000 UTC m=+0.117892157 container remove e8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 19:29:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:36.675 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[90cdd753-8e66-4411-a3b7-71fc699c0588]: (4, ('Tue Jan 27 07:29:36 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc (e8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8)\ne8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8\nTue Jan 27 07:29:36 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc (e8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8)\ne8fe01e4fc813daa728d38ce8332cfbb2959ab579193cbf68096519d4c3dcfa8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:36.678 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[37343cc1-a266-4b8f-a8a5-7f977a0fab61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:36.679 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap400ee5e7-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:36 compute-0 kernel: tap400ee5e7-00: left promiscuous mode
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.700 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:36.702 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[96181992-5911-4562-af40-b4f82e31b6a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:36 compute-0 nova_compute[185480]: 2026-01-27 19:29:36.710 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:36.723 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[612fac95-b537-4488-bf1d-b73f1bf57875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:36.725 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[682c08cc-5ae2-41f8-ad5f-1d16b8c7eacc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:36.748 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[47ee6d0a-0837-4e59-90cb-50486055cde0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519904, 'reachable_time': 40253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249977, 'error': None, 'target': 'ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d400ee5e7\x2d0154\x2d41c9\x2db068\x2db9b4ba0c2fdc.mount: Deactivated successfully.
Jan 27 19:29:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:36.757 107353 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-400ee5e7-0154-41c9-b068-b9b4ba0c2fdc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 19:29:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:36.757 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[08ea81dd-f782-410d-8433-40417d23b923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:37 compute-0 sshd-session[249943]: Connection closed by authenticating user root 156.227.233.86 port 47008 [preauth]
Jan 27 19:29:38 compute-0 nova_compute[185480]: 2026-01-27 19:29:38.319 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:38 compute-0 nova_compute[185480]: 2026-01-27 19:29:38.458 185484 DEBUG nova.compute.manager [req-f9b2a51f-9441-448b-bb99-a80e1fdafe01 req-8bcc2760-0dea-4f2e-8836-67078c2d01d4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Received event network-vif-unplugged-b0ee253c-f6f7-42b0-906e-b575d0104fbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:38 compute-0 nova_compute[185480]: 2026-01-27 19:29:38.458 185484 DEBUG oslo_concurrency.lockutils [req-f9b2a51f-9441-448b-bb99-a80e1fdafe01 req-8bcc2760-0dea-4f2e-8836-67078c2d01d4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "be961482-e05a-4655-96ea-7d4810738a3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:38 compute-0 nova_compute[185480]: 2026-01-27 19:29:38.459 185484 DEBUG oslo_concurrency.lockutils [req-f9b2a51f-9441-448b-bb99-a80e1fdafe01 req-8bcc2760-0dea-4f2e-8836-67078c2d01d4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:38 compute-0 nova_compute[185480]: 2026-01-27 19:29:38.459 185484 DEBUG oslo_concurrency.lockutils [req-f9b2a51f-9441-448b-bb99-a80e1fdafe01 req-8bcc2760-0dea-4f2e-8836-67078c2d01d4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:38 compute-0 nova_compute[185480]: 2026-01-27 19:29:38.460 185484 DEBUG nova.compute.manager [req-f9b2a51f-9441-448b-bb99-a80e1fdafe01 req-8bcc2760-0dea-4f2e-8836-67078c2d01d4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] No waiting events found dispatching network-vif-unplugged-b0ee253c-f6f7-42b0-906e-b575d0104fbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:29:38 compute-0 nova_compute[185480]: 2026-01-27 19:29:38.460 185484 DEBUG nova.compute.manager [req-f9b2a51f-9441-448b-bb99-a80e1fdafe01 req-8bcc2760-0dea-4f2e-8836-67078c2d01d4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Received event network-vif-unplugged-b0ee253c-f6f7-42b0-906e-b575d0104fbb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 19:29:38 compute-0 sshd-session[249979]: Connection closed by authenticating user root 156.227.233.86 port 59694 [preauth]
Jan 27 19:29:39 compute-0 nova_compute[185480]: 2026-01-27 19:29:39.182 185484 DEBUG nova.network.neutron [req-e61cc371-28bf-4e67-a9a9-919421e8a40f req-dad88d9b-985e-4010-b590-b6002ccedb81 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updated VIF entry in instance network info cache for port b0ee253c-f6f7-42b0-906e-b575d0104fbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:29:39 compute-0 nova_compute[185480]: 2026-01-27 19:29:39.183 185484 DEBUG nova.network.neutron [req-e61cc371-28bf-4e67-a9a9-919421e8a40f req-dad88d9b-985e-4010-b590-b6002ccedb81 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updating instance_info_cache with network_info: [{"id": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "address": "fa:16:3e:d8:60:a3", "network": {"id": "400ee5e7-0154-41c9-b068-b9b4ba0c2fdc", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1659482718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4102701402ff4f059dd67182960e5b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0ee253c-f6", "ovs_interfaceid": "b0ee253c-f6f7-42b0-906e-b575d0104fbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:29:39 compute-0 nova_compute[185480]: 2026-01-27 19:29:39.280 185484 DEBUG oslo_concurrency.lockutils [req-e61cc371-28bf-4e67-a9a9-919421e8a40f req-dad88d9b-985e-4010-b590-b6002ccedb81 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-be961482-e05a-4655-96ea-7d4810738a3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:29:39 compute-0 sshd-session[249981]: Connection closed by authenticating user root 156.227.233.86 port 55008 [preauth]
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.143 185484 DEBUG nova.network.neutron [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Updating instance_info_cache with network_info: [{"id": "a8863494-47b5-4610-a914-aacdead1041c", "address": "fa:16:3e:b8:61:05", "network": {"id": "57c7aa1f-1c4a-44de-8222-7196a10c62b5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1139789201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea88acf7f2624e02bf32ef2e9e9dc9df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8863494-47", "ovs_interfaceid": "a8863494-47b5-4610-a914-aacdead1041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.208 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.238 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Releasing lock "refresh_cache-574ebdb2-135f-42aa-aa9a-4d986501daf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.239 185484 DEBUG nova.compute.manager [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Instance network_info: |[{"id": "a8863494-47b5-4610-a914-aacdead1041c", "address": "fa:16:3e:b8:61:05", "network": {"id": "57c7aa1f-1c4a-44de-8222-7196a10c62b5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1139789201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea88acf7f2624e02bf32ef2e9e9dc9df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8863494-47", "ovs_interfaceid": "a8863494-47b5-4610-a914-aacdead1041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.240 185484 DEBUG oslo_concurrency.lockutils [req-8f8dadcf-2f8c-4d25-bb91-a1d95bb16e45 req-32093ecb-8bbe-41af-b574-0edb3ad0217f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-574ebdb2-135f-42aa-aa9a-4d986501daf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.241 185484 DEBUG nova.network.neutron [req-8f8dadcf-2f8c-4d25-bb91-a1d95bb16e45 req-32093ecb-8bbe-41af-b574-0edb3ad0217f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Refreshing network info cache for port a8863494-47b5-4610-a914-aacdead1041c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.246 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Start _get_guest_xml network_info=[{"id": "a8863494-47b5-4610-a914-aacdead1041c", "address": "fa:16:3e:b8:61:05", "network": {"id": "57c7aa1f-1c4a-44de-8222-7196a10c62b5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1139789201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea88acf7f2624e02bf32ef2e9e9dc9df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8863494-47", "ovs_interfaceid": "a8863494-47b5-4610-a914-aacdead1041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T19:26:43Z,direct_url=<?>,disk_format='qcow2',id=729797c6-2677-44bd-a4a8-949d1f57b0a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T19:26:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': '729797c6-2677-44bd-a4a8-949d1f57b0a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.258 185484 WARNING nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.278 185484 DEBUG nova.virt.libvirt.host [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.280 185484 DEBUG nova.virt.libvirt.host [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.287 185484 DEBUG nova.virt.libvirt.host [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.288 185484 DEBUG nova.virt.libvirt.host [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.290 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.291 185484 DEBUG nova.virt.hardware [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T19:26:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='49f81b8c-e0df-4a53-87c6-69576be59651',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T19:26:43Z,direct_url=<?>,disk_format='qcow2',id=729797c6-2677-44bd-a4a8-949d1f57b0a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T19:26:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.293 185484 DEBUG nova.virt.hardware [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.293 185484 DEBUG nova.virt.hardware [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.294 185484 DEBUG nova.virt.hardware [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.295 185484 DEBUG nova.virt.hardware [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.296 185484 DEBUG nova.virt.hardware [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.297 185484 DEBUG nova.virt.hardware [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.297 185484 DEBUG nova.virt.hardware [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.298 185484 DEBUG nova.virt.hardware [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.299 185484 DEBUG nova.virt.hardware [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.300 185484 DEBUG nova.virt.hardware [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.307 185484 DEBUG nova.virt.libvirt.vif [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:29:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-92882849',display_name='tempest-ServersTestManualDisk-server-92882849',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-92882849',id=10,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWH67S/CDsqtfVH48IJT4mJqV7UbvuQiNWsX6gJu/0ev8/JNVPO6ly3ja2il3UJmo/KTtjVTzgT3BzsV3CcfQZDIcAIi6hS50gaDqoMoirhUVpAZpFINMdSg4KESVJT2w==',key_name='tempest-keypair-61082741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea88acf7f2624e02bf32ef2e9e9dc9df',ramdisk_id='',reservation_id='r-89xfakeg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-115566997',owner_user_name='tempest-ServersTestManualDisk-115566997-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:29:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58fd103107724704ada2dddb2ed9734d',uuid=574ebdb2-135f-42aa-aa9a-4d986501daf4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8863494-47b5-4610-a914-aacdead1041c", "address": "fa:16:3e:b8:61:05", "network": {"id": "57c7aa1f-1c4a-44de-8222-7196a10c62b5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1139789201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea88acf7f2624e02bf32ef2e9e9dc9df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8863494-47", "ovs_interfaceid": "a8863494-47b5-4610-a914-aacdead1041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.308 185484 DEBUG nova.network.os_vif_util [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Converting VIF {"id": "a8863494-47b5-4610-a914-aacdead1041c", "address": "fa:16:3e:b8:61:05", "network": {"id": "57c7aa1f-1c4a-44de-8222-7196a10c62b5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1139789201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea88acf7f2624e02bf32ef2e9e9dc9df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8863494-47", "ovs_interfaceid": "a8863494-47b5-4610-a914-aacdead1041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.310 185484 DEBUG nova.network.os_vif_util [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:05,bridge_name='br-int',has_traffic_filtering=True,id=a8863494-47b5-4610-a914-aacdead1041c,network=Network(57c7aa1f-1c4a-44de-8222-7196a10c62b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8863494-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.312 185484 DEBUG nova.objects.instance [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lazy-loading 'pci_devices' on Instance uuid 574ebdb2-135f-42aa-aa9a-4d986501daf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.344 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] End _get_guest_xml xml=<domain type="kvm">
Jan 27 19:29:40 compute-0 nova_compute[185480]:   <uuid>574ebdb2-135f-42aa-aa9a-4d986501daf4</uuid>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   <name>instance-0000000a</name>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   <memory>131072</memory>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   <vcpu>1</vcpu>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   <metadata>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <nova:name>tempest-ServersTestManualDisk-server-92882849</nova:name>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <nova:creationTime>2026-01-27 19:29:40</nova:creationTime>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <nova:flavor name="m1.nano">
Jan 27 19:29:40 compute-0 nova_compute[185480]:         <nova:memory>128</nova:memory>
Jan 27 19:29:40 compute-0 nova_compute[185480]:         <nova:disk>1</nova:disk>
Jan 27 19:29:40 compute-0 nova_compute[185480]:         <nova:swap>0</nova:swap>
Jan 27 19:29:40 compute-0 nova_compute[185480]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 19:29:40 compute-0 nova_compute[185480]:         <nova:vcpus>1</nova:vcpus>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       </nova:flavor>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <nova:owner>
Jan 27 19:29:40 compute-0 nova_compute[185480]:         <nova:user uuid="58fd103107724704ada2dddb2ed9734d">tempest-ServersTestManualDisk-115566997-project-member</nova:user>
Jan 27 19:29:40 compute-0 nova_compute[185480]:         <nova:project uuid="ea88acf7f2624e02bf32ef2e9e9dc9df">tempest-ServersTestManualDisk-115566997</nova:project>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       </nova:owner>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <nova:root type="image" uuid="729797c6-2677-44bd-a4a8-949d1f57b0a2"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <nova:ports>
Jan 27 19:29:40 compute-0 nova_compute[185480]:         <nova:port uuid="a8863494-47b5-4610-a914-aacdead1041c">
Jan 27 19:29:40 compute-0 nova_compute[185480]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:         </nova:port>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       </nova:ports>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     </nova:instance>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   </metadata>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   <sysinfo type="smbios">
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <system>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <entry name="manufacturer">RDO</entry>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <entry name="product">OpenStack Compute</entry>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <entry name="serial">574ebdb2-135f-42aa-aa9a-4d986501daf4</entry>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <entry name="uuid">574ebdb2-135f-42aa-aa9a-4d986501daf4</entry>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <entry name="family">Virtual Machine</entry>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     </system>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   </sysinfo>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   <os>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <boot dev="hd"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <smbios mode="sysinfo"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   </os>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   <features>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <acpi/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <apic/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <vmcoreinfo/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   </features>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   <clock offset="utc">
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <timer name="hpet" present="no"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   </clock>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   <cpu mode="host-model" match="exact">
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   </cpu>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   <devices>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <target dev="vda" bus="virtio"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <disk type="file" device="cdrom">
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk.config"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <target dev="sda" bus="sata"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <interface type="ethernet">
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <mac address="fa:16:3e:b8:61:05"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <mtu size="1442"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <target dev="tapa8863494-47"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     </interface>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <serial type="pty">
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <log file="/var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/console.log" append="off"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     </serial>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <video>
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     </video>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <input type="tablet" bus="usb"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <rng model="virtio">
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <backend model="random">/dev/urandom</backend>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     </rng>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <controller type="usb" index="0"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     <memballoon model="virtio">
Jan 27 19:29:40 compute-0 nova_compute[185480]:       <stats period="10"/>
Jan 27 19:29:40 compute-0 nova_compute[185480]:     </memballoon>
Jan 27 19:29:40 compute-0 nova_compute[185480]:   </devices>
Jan 27 19:29:40 compute-0 nova_compute[185480]: </domain>
Jan 27 19:29:40 compute-0 nova_compute[185480]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.345 185484 DEBUG nova.compute.manager [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Preparing to wait for external event network-vif-plugged-a8863494-47b5-4610-a914-aacdead1041c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.345 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Acquiring lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.346 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.346 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.347 185484 DEBUG nova.virt.libvirt.vif [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T19:29:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-92882849',display_name='tempest-ServersTestManualDisk-server-92882849',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-92882849',id=10,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWH67S/CDsqtfVH48IJT4mJqV7UbvuQiNWsX6gJu/0ev8/JNVPO6ly3ja2il3UJmo/KTtjVTzgT3BzsV3CcfQZDIcAIi6hS50gaDqoMoirhUVpAZpFINMdSg4KESVJT2w==',key_name='tempest-keypair-61082741',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea88acf7f2624e02bf32ef2e9e9dc9df',ramdisk_id='',reservation_id='r-89xfakeg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-115566997',owner_user_name='tempest-ServersTestManualDisk-115566997-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:29:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58fd103107724704ada2dddb2ed9734d',uuid=574ebdb2-135f-42aa-aa9a-4d986501daf4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8863494-47b5-4610-a914-aacdead1041c", "address": "fa:16:3e:b8:61:05", "network": {"id": "57c7aa1f-1c4a-44de-8222-7196a10c62b5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1139789201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea88acf7f2624e02bf32ef2e9e9dc9df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8863494-47", "ovs_interfaceid": "a8863494-47b5-4610-a914-aacdead1041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.348 185484 DEBUG nova.network.os_vif_util [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Converting VIF {"id": "a8863494-47b5-4610-a914-aacdead1041c", "address": "fa:16:3e:b8:61:05", "network": {"id": "57c7aa1f-1c4a-44de-8222-7196a10c62b5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1139789201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea88acf7f2624e02bf32ef2e9e9dc9df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8863494-47", "ovs_interfaceid": "a8863494-47b5-4610-a914-aacdead1041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.349 185484 DEBUG nova.network.os_vif_util [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:05,bridge_name='br-int',has_traffic_filtering=True,id=a8863494-47b5-4610-a914-aacdead1041c,network=Network(57c7aa1f-1c4a-44de-8222-7196a10c62b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8863494-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.349 185484 DEBUG os_vif [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:05,bridge_name='br-int',has_traffic_filtering=True,id=a8863494-47b5-4610-a914-aacdead1041c,network=Network(57c7aa1f-1c4a-44de-8222-7196a10c62b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8863494-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.350 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.350 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.351 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.361 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.361 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8863494-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.362 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8863494-47, col_values=(('external_ids', {'iface-id': 'a8863494-47b5-4610-a914-aacdead1041c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:61:05', 'vm-uuid': '574ebdb2-135f-42aa-aa9a-4d986501daf4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.365 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:40 compute-0 NetworkManager[56191]: <info>  [1769542180.3664] manager: (tapa8863494-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.367 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.381 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.382 185484 INFO os_vif [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:05,bridge_name='br-int',has_traffic_filtering=True,id=a8863494-47b5-4610-a914-aacdead1041c,network=Network(57c7aa1f-1c4a-44de-8222-7196a10c62b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8863494-47')
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.527 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.528 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.528 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] No VIF found with MAC fa:16:3e:b8:61:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 19:29:40 compute-0 nova_compute[185480]: 2026-01-27 19:29:40.530 185484 INFO nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Using config drive
Jan 27 19:29:40 compute-0 sshd-session[249983]: Connection closed by authenticating user root 156.227.233.86 port 41142 [preauth]
Jan 27 19:29:41 compute-0 nova_compute[185480]: 2026-01-27 19:29:41.060 185484 DEBUG nova.compute.manager [req-fc706444-97eb-4daa-8004-2455e21a86f6 req-13c43faa-3840-4803-aec8-11ca6a77b8aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Received event network-vif-plugged-b0ee253c-f6f7-42b0-906e-b575d0104fbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:41 compute-0 nova_compute[185480]: 2026-01-27 19:29:41.061 185484 DEBUG oslo_concurrency.lockutils [req-fc706444-97eb-4daa-8004-2455e21a86f6 req-13c43faa-3840-4803-aec8-11ca6a77b8aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "be961482-e05a-4655-96ea-7d4810738a3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:41 compute-0 nova_compute[185480]: 2026-01-27 19:29:41.062 185484 DEBUG oslo_concurrency.lockutils [req-fc706444-97eb-4daa-8004-2455e21a86f6 req-13c43faa-3840-4803-aec8-11ca6a77b8aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:41 compute-0 nova_compute[185480]: 2026-01-27 19:29:41.063 185484 DEBUG oslo_concurrency.lockutils [req-fc706444-97eb-4daa-8004-2455e21a86f6 req-13c43faa-3840-4803-aec8-11ca6a77b8aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:41 compute-0 nova_compute[185480]: 2026-01-27 19:29:41.064 185484 DEBUG nova.compute.manager [req-fc706444-97eb-4daa-8004-2455e21a86f6 req-13c43faa-3840-4803-aec8-11ca6a77b8aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] No waiting events found dispatching network-vif-plugged-b0ee253c-f6f7-42b0-906e-b575d0104fbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:29:41 compute-0 nova_compute[185480]: 2026-01-27 19:29:41.065 185484 WARNING nova.compute.manager [req-fc706444-97eb-4daa-8004-2455e21a86f6 req-13c43faa-3840-4803-aec8-11ca6a77b8aa bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Received unexpected event network-vif-plugged-b0ee253c-f6f7-42b0-906e-b575d0104fbb for instance with vm_state active and task_state deleting.
Jan 27 19:29:41 compute-0 sshd-session[249988]: Connection closed by authenticating user root 156.227.233.86 port 56054 [preauth]
Jan 27 19:29:41 compute-0 nova_compute[185480]: 2026-01-27 19:29:41.817 185484 INFO nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Creating config drive at /var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk.config
Jan 27 19:29:41 compute-0 nova_compute[185480]: 2026-01-27 19:29:41.833 185484 DEBUG oslo_concurrency.processutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbxkefxhm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:29:41 compute-0 nova_compute[185480]: 2026-01-27 19:29:41.982 185484 DEBUG oslo_concurrency.processutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbxkefxhm" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:29:42 compute-0 kernel: tapa8863494-47: entered promiscuous mode
Jan 27 19:29:42 compute-0 NetworkManager[56191]: <info>  [1769542182.0645] manager: (tapa8863494-47): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.070 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:42 compute-0 ovn_controller[97647]: 2026-01-27T19:29:42Z|00104|binding|INFO|Claiming lport a8863494-47b5-4610-a914-aacdead1041c for this chassis.
Jan 27 19:29:42 compute-0 ovn_controller[97647]: 2026-01-27T19:29:42Z|00105|binding|INFO|a8863494-47b5-4610-a914-aacdead1041c: Claiming fa:16:3e:b8:61:05 10.100.0.12
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.074 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:42 compute-0 ovn_controller[97647]: 2026-01-27T19:29:42Z|00106|binding|INFO|Setting lport a8863494-47b5-4610-a914-aacdead1041c ovn-installed in OVS
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.088 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.096 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:42 compute-0 systemd-machined[156762]: New machine qemu-10-instance-0000000a.
Jan 27 19:29:42 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Jan 27 19:29:42 compute-0 systemd-udevd[250010]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:29:42 compute-0 ovn_controller[97647]: 2026-01-27T19:29:42Z|00107|binding|INFO|Setting lport a8863494-47b5-4610-a914-aacdead1041c up in Southbound
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.169 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:61:05 10.100.0.12'], port_security=['fa:16:3e:b8:61:05 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '574ebdb2-135f-42aa-aa9a-4d986501daf4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57c7aa1f-1c4a-44de-8222-7196a10c62b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea88acf7f2624e02bf32ef2e9e9dc9df', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26651e32-354c-46ca-b9a0-a56ff4f18c16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b589698-2036-45ed-bef1-b88d6e58ffe0, chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=a8863494-47b5-4610-a914-aacdead1041c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.171 106898 INFO neutron.agent.ovn.metadata.agent [-] Port a8863494-47b5-4610-a914-aacdead1041c in datapath 57c7aa1f-1c4a-44de-8222-7196a10c62b5 bound to our chassis
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.175 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 57c7aa1f-1c4a-44de-8222-7196a10c62b5
Jan 27 19:29:42 compute-0 NetworkManager[56191]: <info>  [1769542182.1879] device (tapa8863494-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 19:29:42 compute-0 NetworkManager[56191]: <info>  [1769542182.1916] device (tapa8863494-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.193 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[8f05f8c9-2c32-4651-88ed-ed9abfd89823]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.195 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap57c7aa1f-11 in ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.199 238834 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap57c7aa1f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.199 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[d0bcb886-b5c8-4404-b0bc-18dd7f6a9e68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.203 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[4edd4f4d-6e62-4b5a-ac7f-3f6322902668]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.222 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b2e6d9-a417-48be-8f1f-adf39839280e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.264 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[22fc76df-8747-47d3-ac80-6dbd267e275e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 ovn_controller[97647]: 2026-01-27T19:29:42Z|00108|binding|INFO|Releasing lport 48c695f0-2fed-4bb1-9b53-847c4dc25e7f from this chassis (sb_readonly=0)
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.301 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[779ba6f4-7a7e-4a09-a2d0-378f11874a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.314 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[35138be9-15fd-4768-8456-a74761357758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 NetworkManager[56191]: <info>  [1769542182.3193] manager: (tap57c7aa1f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/52)
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.373 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[ae099ef0-424f-43b6-bf08-c8c44cfa0fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.377 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f53d11-b492-4d7c-b6fa-282cb0b8d48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 NetworkManager[56191]: <info>  [1769542182.4126] device (tap57c7aa1f-10): carrier: link connected
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.412 185484 DEBUG nova.network.neutron [-] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.416 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[c084f4c2-07a5-4bfe-8d53-80bd136517de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.436 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3903d3-ed27-4729-b18d-37c158c34c61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57c7aa1f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:e6:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529897, 'reachable_time': 34704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250045, 'error': None, 'target': 'ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.459 185484 INFO nova.compute.manager [-] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Took 5.95 seconds to deallocate network for instance.
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.462 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[92907612-58d6-4582-87e6-3e4eef332bee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:e6cd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529897, 'tstamp': 529897}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250047, 'error': None, 'target': 'ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.488 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c0c444-421a-4159-ba29-a9ae6aad9b99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57c7aa1f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:e6:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529897, 'reachable_time': 34704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250048, 'error': None, 'target': 'ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.529 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[2129fe3c-e597-4b5a-be7a-c693a937be99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.563 185484 DEBUG oslo_concurrency.lockutils [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.563 185484 DEBUG oslo_concurrency.lockutils [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.633 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[32647057-e67b-4cbd-a5ac-9c5f307569fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.644 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57c7aa1f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.644 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.644 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542182.6444237, 574ebdb2-135f-42aa-aa9a-4d986501daf4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.645 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57c7aa1f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.645 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] VM Started (Lifecycle Event)
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.648 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:42 compute-0 kernel: tap57c7aa1f-10: entered promiscuous mode
Jan 27 19:29:42 compute-0 NetworkManager[56191]: <info>  [1769542182.6500] manager: (tap57c7aa1f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.664 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap57c7aa1f-10, col_values=(('external_ids', {'iface-id': 'f533bce1-7138-481f-9d3f-5dd6aaad44e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.666 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:42 compute-0 ovn_controller[97647]: 2026-01-27T19:29:42Z|00109|binding|INFO|Releasing lport f533bce1-7138-481f-9d3f-5dd6aaad44e5 from this chassis (sb_readonly=0)
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.667 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.670 106898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/57c7aa1f-1c4a-44de-8222-7196a10c62b5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/57c7aa1f-1c4a-44de-8222-7196a10c62b5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.672 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d69da7-69b5-4f47-b370-15c533012ac3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.673 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.674 106898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: global
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     log         /dev/log local0 debug
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     log-tag     haproxy-metadata-proxy-57c7aa1f-1c4a-44de-8222-7196a10c62b5
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     user        root
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     group       root
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     maxconn     1024
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     pidfile     /var/lib/neutron/external/pids/57c7aa1f-1c4a-44de-8222-7196a10c62b5.pid.haproxy
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     daemon
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: defaults
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     log global
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     mode http
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     option httplog
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     option dontlognull
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     option http-server-close
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     option forwardfor
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     retries                 3
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     timeout http-request    30s
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     timeout connect         30s
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     timeout client          32s
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     timeout server          32s
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     timeout http-keep-alive 30s
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: listen listener
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     bind 169.254.169.254:80
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:     http-request add-header X-OVN-Network-ID 57c7aa1f-1c4a-44de-8222-7196a10c62b5
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 19:29:42 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:42.676 106898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5', 'env', 'PROCESS_TAG=haproxy-57c7aa1f-1c4a-44de-8222-7196a10c62b5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/57c7aa1f-1c4a-44de-8222-7196a10c62b5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.685 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.688 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542182.6445878, 574ebdb2-135f-42aa-aa9a-4d986501daf4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.688 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] VM Paused (Lifecycle Event)
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.729 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.746 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.799 185484 DEBUG nova.compute.provider_tree [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:29:42 compute-0 sshd-session[249993]: Connection closed by authenticating user root 156.227.233.86 port 41264 [preauth]
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.816 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:29:42 compute-0 nova_compute[185480]: 2026-01-27 19:29:42.847 185484 DEBUG nova.scheduler.client.report [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:29:43 compute-0 nova_compute[185480]: 2026-01-27 19:29:43.017 185484 DEBUG oslo_concurrency.lockutils [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:43 compute-0 nova_compute[185480]: 2026-01-27 19:29:43.095 185484 INFO nova.scheduler.client.report [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Deleted allocations for instance be961482-e05a-4655-96ea-7d4810738a3c
Jan 27 19:29:43 compute-0 podman[250087]: 2026-01-27 19:29:43.096906891 +0000 UTC m=+0.026509626 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 19:29:43 compute-0 podman[250087]: 2026-01-27 19:29:43.200139624 +0000 UTC m=+0.129742339 container create b877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 19:29:43 compute-0 nova_compute[185480]: 2026-01-27 19:29:43.227 185484 DEBUG oslo_concurrency.lockutils [None req-c12fd9c5-a156-467b-93e9-2b35c26ab395 87274dd877fa476fa885a5665ba052ec 4102701402ff4f059dd67182960e5b64 - - default default] Lock "be961482-e05a-4655-96ea-7d4810738a3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:43 compute-0 systemd[1]: Started libpod-conmon-b877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2.scope.
Jan 27 19:29:43 compute-0 systemd[1]: Started libcrun container.
Jan 27 19:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555965bb3b595d72944cafef64234778ace073f6377333104bcece89e1725430/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 19:29:43 compute-0 nova_compute[185480]: 2026-01-27 19:29:43.321 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:43 compute-0 podman[250087]: 2026-01-27 19:29:43.333252236 +0000 UTC m=+0.262855001 container init b877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 19:29:43 compute-0 podman[250101]: 2026-01-27 19:29:43.337003189 +0000 UTC m=+0.104915076 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi)
Jan 27 19:29:43 compute-0 podman[250100]: 2026-01-27 19:29:43.341033619 +0000 UTC m=+0.108055773 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 19:29:43 compute-0 podman[250087]: 2026-01-27 19:29:43.347304074 +0000 UTC m=+0.276906809 container start b877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:29:43 compute-0 neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5[250120]: [NOTICE]   (250147) : New worker (250149) forked
Jan 27 19:29:43 compute-0 neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5[250120]: [NOTICE]   (250147) : Loading success.
Jan 27 19:29:43 compute-0 sshd-session[250074]: Connection closed by authenticating user root 156.227.233.86 port 54852 [preauth]
Jan 27 19:29:43 compute-0 nova_compute[185480]: 2026-01-27 19:29:43.844 185484 DEBUG nova.compute.manager [req-c7eee7c0-2352-44c3-a91a-f96417b8690d req-6e7c6b94-5f57-4a4b-8a1f-16880f39600d bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Received event network-vif-deleted-b0ee253c-f6f7-42b0-906e-b575d0104fbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:44 compute-0 podman[250160]: 2026-01-27 19:29:44.336766267 +0000 UTC m=+0.106663689 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, version=9.4, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 19:29:44 compute-0 nova_compute[185480]: 2026-01-27 19:29:44.665 185484 DEBUG nova.network.neutron [req-8f8dadcf-2f8c-4d25-bb91-a1d95bb16e45 req-32093ecb-8bbe-41af-b574-0edb3ad0217f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Updated VIF entry in instance network info cache for port a8863494-47b5-4610-a914-aacdead1041c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:29:44 compute-0 nova_compute[185480]: 2026-01-27 19:29:44.672 185484 DEBUG nova.network.neutron [req-8f8dadcf-2f8c-4d25-bb91-a1d95bb16e45 req-32093ecb-8bbe-41af-b574-0edb3ad0217f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Updating instance_info_cache with network_info: [{"id": "a8863494-47b5-4610-a914-aacdead1041c", "address": "fa:16:3e:b8:61:05", "network": {"id": "57c7aa1f-1c4a-44de-8222-7196a10c62b5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1139789201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea88acf7f2624e02bf32ef2e9e9dc9df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8863494-47", "ovs_interfaceid": "a8863494-47b5-4610-a914-aacdead1041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:29:44 compute-0 nova_compute[185480]: 2026-01-27 19:29:44.808 185484 DEBUG oslo_concurrency.lockutils [req-8f8dadcf-2f8c-4d25-bb91-a1d95bb16e45 req-32093ecb-8bbe-41af-b574-0edb3ad0217f bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-574ebdb2-135f-42aa-aa9a-4d986501daf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:29:44 compute-0 sshd-session[250158]: Connection closed by authenticating user root 156.227.233.86 port 39282 [preauth]
Jan 27 19:29:44 compute-0 ovn_controller[97647]: 2026-01-27T19:29:44Z|00110|memory|INFO|peak resident set size grew 51% in last 2636.2 seconds, from 16256 kB to 24488 kB
Jan 27 19:29:44 compute-0 ovn_controller[97647]: 2026-01-27T19:29:44Z|00111|memory|INFO|idl-cells-OVN_Southbound:10750 idl-cells-Open_vSwitch:813 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:357 lflow-cache-entries-cache-matches:294 lflow-cache-size-KB:1511 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:625 ofctrl_installed_flow_usage-KB:456 ofctrl_sb_flow_ref_usage-KB:237
Jan 27 19:29:45 compute-0 nova_compute[185480]: 2026-01-27 19:29:45.367 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:45 compute-0 nova_compute[185480]: 2026-01-27 19:29:45.399 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:45 compute-0 sshd-session[250179]: Connection closed by authenticating user root 156.227.233.86 port 52366 [preauth]
Jan 27 19:29:46 compute-0 sshd-session[250181]: Connection closed by authenticating user root 156.227.233.86 port 36306 [preauth]
Jan 27 19:29:47 compute-0 nova_compute[185480]: 2026-01-27 19:29:47.814 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:47 compute-0 sshd-session[250183]: Connection closed by authenticating user root 156.227.233.86 port 51662 [preauth]
Jan 27 19:29:48 compute-0 nova_compute[185480]: 2026-01-27 19:29:48.324 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:48 compute-0 sshd-session[250185]: Connection closed by authenticating user root 156.227.233.86 port 37614 [preauth]
Jan 27 19:29:49 compute-0 sshd-session[250187]: Connection closed by authenticating user root 156.227.233.86 port 50046 [preauth]
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.373 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.523 185484 DEBUG nova.compute.manager [req-d9e659e2-e8a2-4787-bb38-c83b85a14521 req-ca08f754-f0b9-4141-a355-9d1624c98801 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Received event network-vif-plugged-a8863494-47b5-4610-a914-aacdead1041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.525 185484 DEBUG oslo_concurrency.lockutils [req-d9e659e2-e8a2-4787-bb38-c83b85a14521 req-ca08f754-f0b9-4141-a355-9d1624c98801 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.526 185484 DEBUG oslo_concurrency.lockutils [req-d9e659e2-e8a2-4787-bb38-c83b85a14521 req-ca08f754-f0b9-4141-a355-9d1624c98801 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.527 185484 DEBUG oslo_concurrency.lockutils [req-d9e659e2-e8a2-4787-bb38-c83b85a14521 req-ca08f754-f0b9-4141-a355-9d1624c98801 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.528 185484 DEBUG nova.compute.manager [req-d9e659e2-e8a2-4787-bb38-c83b85a14521 req-ca08f754-f0b9-4141-a355-9d1624c98801 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Processing event network-vif-plugged-a8863494-47b5-4610-a914-aacdead1041c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.529 185484 DEBUG nova.compute.manager [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.544 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542190.5355806, 574ebdb2-135f-42aa-aa9a-4d986501daf4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.545 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] VM Resumed (Lifecycle Event)
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.547 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.555 185484 INFO nova.virt.libvirt.driver [-] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Instance spawned successfully.
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.557 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.581 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.593 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.600 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.601 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.601 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.601 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.602 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.602 185484 DEBUG nova.virt.libvirt.driver [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.629 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.691 185484 INFO nova.compute.manager [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Took 22.27 seconds to spawn the instance on the hypervisor.
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.691 185484 DEBUG nova.compute.manager [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.751 185484 INFO nova.compute.manager [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Took 23.16 seconds to build instance.
Jan 27 19:29:50 compute-0 nova_compute[185480]: 2026-01-27 19:29:50.776 185484 DEBUG oslo_concurrency.lockutils [None req-c200f7c6-e4d7-401c-950b-76df1586b6d2 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:50 compute-0 sshd-session[250190]: Connection closed by authenticating user root 156.227.233.86 port 35574 [preauth]
Jan 27 19:29:51 compute-0 nova_compute[185480]: 2026-01-27 19:29:51.162 185484 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769542176.1605864, be961482-e05a-4655-96ea-7d4810738a3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:29:51 compute-0 nova_compute[185480]: 2026-01-27 19:29:51.162 185484 INFO nova.compute.manager [-] [instance: be961482-e05a-4655-96ea-7d4810738a3c] VM Stopped (Lifecycle Event)
Jan 27 19:29:51 compute-0 nova_compute[185480]: 2026-01-27 19:29:51.187 185484 DEBUG nova.compute.manager [None req-70f30b3e-3f16-47ec-9e4f-e290b3c45609 - - - - - -] [instance: be961482-e05a-4655-96ea-7d4810738a3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:29:51 compute-0 podman[250194]: 2026-01-27 19:29:51.298460544 +0000 UTC m=+0.076528204 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350)
Jan 27 19:29:51 compute-0 sshd-session[250192]: Connection closed by authenticating user root 156.227.233.86 port 49996 [preauth]
Jan 27 19:29:52 compute-0 nova_compute[185480]: 2026-01-27 19:29:52.674 185484 DEBUG nova.compute.manager [req-de607e5e-fd06-4480-982e-35ef6edcf757 req-9e512bc9-8662-4651-837c-6f5b94e09a41 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Received event network-vif-plugged-a8863494-47b5-4610-a914-aacdead1041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:52 compute-0 nova_compute[185480]: 2026-01-27 19:29:52.674 185484 DEBUG oslo_concurrency.lockutils [req-de607e5e-fd06-4480-982e-35ef6edcf757 req-9e512bc9-8662-4651-837c-6f5b94e09a41 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:52 compute-0 nova_compute[185480]: 2026-01-27 19:29:52.674 185484 DEBUG oslo_concurrency.lockutils [req-de607e5e-fd06-4480-982e-35ef6edcf757 req-9e512bc9-8662-4651-837c-6f5b94e09a41 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:52 compute-0 nova_compute[185480]: 2026-01-27 19:29:52.675 185484 DEBUG oslo_concurrency.lockutils [req-de607e5e-fd06-4480-982e-35ef6edcf757 req-9e512bc9-8662-4651-837c-6f5b94e09a41 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:52 compute-0 nova_compute[185480]: 2026-01-27 19:29:52.675 185484 DEBUG nova.compute.manager [req-de607e5e-fd06-4480-982e-35ef6edcf757 req-9e512bc9-8662-4651-837c-6f5b94e09a41 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] No waiting events found dispatching network-vif-plugged-a8863494-47b5-4610-a914-aacdead1041c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:29:52 compute-0 nova_compute[185480]: 2026-01-27 19:29:52.675 185484 WARNING nova.compute.manager [req-de607e5e-fd06-4480-982e-35ef6edcf757 req-9e512bc9-8662-4651-837c-6f5b94e09a41 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Received unexpected event network-vif-plugged-a8863494-47b5-4610-a914-aacdead1041c for instance with vm_state active and task_state None.
Jan 27 19:29:52 compute-0 sshd-session[250212]: Connection closed by authenticating user root 156.227.233.86 port 37280 [preauth]
Jan 27 19:29:53 compute-0 nova_compute[185480]: 2026-01-27 19:29:53.330 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:53 compute-0 nova_compute[185480]: 2026-01-27 19:29:53.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:29:53 compute-0 nova_compute[185480]: 2026-01-27 19:29:53.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:29:53 compute-0 nova_compute[185480]: 2026-01-27 19:29:53.517 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:29:53 compute-0 sshd-session[250221]: Connection closed by authenticating user root 156.227.233.86 port 50762 [preauth]
Jan 27 19:29:54 compute-0 nova_compute[185480]: 2026-01-27 19:29:54.311 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:29:54 compute-0 nova_compute[185480]: 2026-01-27 19:29:54.311 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:29:54 compute-0 nova_compute[185480]: 2026-01-27 19:29:54.311 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:29:54 compute-0 nova_compute[185480]: 2026-01-27 19:29:54.312 185484 DEBUG nova.objects.instance [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lazy-loading 'info_cache' on Instance uuid f79ddcc5-ee21-43e8-9d0d-60476a477361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:29:54 compute-0 sshd-session[250227]: Connection closed by authenticating user root 156.227.233.86 port 33934 [preauth]
Jan 27 19:29:55 compute-0 nova_compute[185480]: 2026-01-27 19:29:55.109 185484 DEBUG nova.compute.manager [req-107e110b-eea7-4858-8fc4-5ca9534e5fb2 req-b5237cf5-78c4-46bb-b5e6-22632270ad90 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Received event network-changed-a8863494-47b5-4610-a914-aacdead1041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:29:55 compute-0 nova_compute[185480]: 2026-01-27 19:29:55.110 185484 DEBUG nova.compute.manager [req-107e110b-eea7-4858-8fc4-5ca9534e5fb2 req-b5237cf5-78c4-46bb-b5e6-22632270ad90 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Refreshing instance network info cache due to event network-changed-a8863494-47b5-4610-a914-aacdead1041c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:29:55 compute-0 nova_compute[185480]: 2026-01-27 19:29:55.110 185484 DEBUG oslo_concurrency.lockutils [req-107e110b-eea7-4858-8fc4-5ca9534e5fb2 req-b5237cf5-78c4-46bb-b5e6-22632270ad90 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-574ebdb2-135f-42aa-aa9a-4d986501daf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:29:55 compute-0 nova_compute[185480]: 2026-01-27 19:29:55.110 185484 DEBUG oslo_concurrency.lockutils [req-107e110b-eea7-4858-8fc4-5ca9534e5fb2 req-b5237cf5-78c4-46bb-b5e6-22632270ad90 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-574ebdb2-135f-42aa-aa9a-4d986501daf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:29:55 compute-0 nova_compute[185480]: 2026-01-27 19:29:55.111 185484 DEBUG nova.network.neutron [req-107e110b-eea7-4858-8fc4-5ca9534e5fb2 req-b5237cf5-78c4-46bb-b5e6-22632270ad90 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Refreshing network info cache for port a8863494-47b5-4610-a914-aacdead1041c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:29:55 compute-0 nova_compute[185480]: 2026-01-27 19:29:55.377 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:55 compute-0 ovn_controller[97647]: 2026-01-27T19:29:55Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:a4:15 10.100.0.8
Jan 27 19:29:55 compute-0 ovn_controller[97647]: 2026-01-27T19:29:55Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:a4:15 10.100.0.8
Jan 27 19:29:55 compute-0 sshd-session[250229]: Connection closed by authenticating user root 156.227.233.86 port 44066 [preauth]
Jan 27 19:29:56 compute-0 nova_compute[185480]: 2026-01-27 19:29:56.053 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:56 compute-0 sshd-session[250231]: Connection closed by authenticating user root 156.227.233.86 port 56820 [preauth]
Jan 27 19:29:57 compute-0 nova_compute[185480]: 2026-01-27 19:29:57.434 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Updating instance_info_cache with network_info: [{"id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "address": "fa:16:3e:de:a4:15", "network": {"id": "47175578-eb32-4720-93c5-05fa0d34701f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2004790355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50b0e23834964280a34973a87d80d1b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2f2dfe-3f", "ovs_interfaceid": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:29:57 compute-0 sshd-session[250233]: Connection closed by authenticating user root 156.227.233.86 port 42140 [preauth]
Jan 27 19:29:57 compute-0 nova_compute[185480]: 2026-01-27 19:29:57.978 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:29:57 compute-0 nova_compute[185480]: 2026-01-27 19:29:57.979 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.105 185484 DEBUG oslo_concurrency.lockutils [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Acquiring lock "574ebdb2-135f-42aa-aa9a-4d986501daf4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.106 185484 DEBUG oslo_concurrency.lockutils [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.107 185484 DEBUG oslo_concurrency.lockutils [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Acquiring lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.108 185484 DEBUG oslo_concurrency.lockutils [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.108 185484 DEBUG oslo_concurrency.lockutils [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.111 185484 INFO nova.compute.manager [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Terminating instance
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.113 185484 DEBUG nova.compute.manager [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 19:29:58 compute-0 kernel: tapa8863494-47 (unregistering): left promiscuous mode
Jan 27 19:29:58 compute-0 NetworkManager[56191]: <info>  [1769542198.1578] device (tapa8863494-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 19:29:58 compute-0 ovn_controller[97647]: 2026-01-27T19:29:58Z|00112|binding|INFO|Releasing lport a8863494-47b5-4610-a914-aacdead1041c from this chassis (sb_readonly=0)
Jan 27 19:29:58 compute-0 ovn_controller[97647]: 2026-01-27T19:29:58Z|00113|binding|INFO|Setting lport a8863494-47b5-4610-a914-aacdead1041c down in Southbound
Jan 27 19:29:58 compute-0 ovn_controller[97647]: 2026-01-27T19:29:58Z|00114|binding|INFO|Removing iface tapa8863494-47 ovn-installed in OVS
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.175 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.178 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.211 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:61:05 10.100.0.12'], port_security=['fa:16:3e:b8:61:05 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '574ebdb2-135f-42aa-aa9a-4d986501daf4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57c7aa1f-1c4a-44de-8222-7196a10c62b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea88acf7f2624e02bf32ef2e9e9dc9df', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26651e32-354c-46ca-b9a0-a56ff4f18c16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b589698-2036-45ed-bef1-b88d6e58ffe0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=a8863494-47b5-4610-a914-aacdead1041c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.211 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.214 106898 INFO neutron.agent.ovn.metadata.agent [-] Port a8863494-47b5-4610-a914-aacdead1041c in datapath 57c7aa1f-1c4a-44de-8222-7196a10c62b5 unbound from our chassis
Jan 27 19:29:58 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Jan 27 19:29:58 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 8.359s CPU time.
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.219 106898 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57c7aa1f-1c4a-44de-8222-7196a10c62b5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.220 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[9189be67-18b7-4eee-83f5-bff6d2a78939]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.225 106898 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5 namespace which is not needed anymore
Jan 27 19:29:58 compute-0 systemd-machined[156762]: Machine qemu-10-instance-0000000a terminated.
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.332 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.350 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.358 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:58 compute-0 podman[250237]: 2026-01-27 19:29:58.363314042 +0000 UTC m=+0.164972101 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260126, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.404 185484 INFO nova.virt.libvirt.driver [-] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Instance destroyed successfully.
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.405 185484 DEBUG nova.objects.instance [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lazy-loading 'resources' on Instance uuid 574ebdb2-135f-42aa-aa9a-4d986501daf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:29:58 compute-0 neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5[250120]: [NOTICE]   (250147) : haproxy version is 2.8.14-c23fe91
Jan 27 19:29:58 compute-0 neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5[250120]: [NOTICE]   (250147) : path to executable is /usr/sbin/haproxy
Jan 27 19:29:58 compute-0 neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5[250120]: [WARNING]  (250147) : Exiting Master process...
Jan 27 19:29:58 compute-0 neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5[250120]: [WARNING]  (250147) : Exiting Master process...
Jan 27 19:29:58 compute-0 neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5[250120]: [ALERT]    (250147) : Current worker (250149) exited with code 143 (Terminated)
Jan 27 19:29:58 compute-0 neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5[250120]: [WARNING]  (250147) : All workers exited. Exiting... (0)
Jan 27 19:29:58 compute-0 systemd[1]: libpod-b877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2.scope: Deactivated successfully.
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.504 185484 DEBUG nova.virt.libvirt.vif [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T19:29:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-92882849',display_name='tempest-ServersTestManualDisk-server-92882849',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-92882849',id=10,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWH67S/CDsqtfVH48IJT4mJqV7UbvuQiNWsX6gJu/0ev8/JNVPO6ly3ja2il3UJmo/KTtjVTzgT3BzsV3CcfQZDIcAIi6hS50gaDqoMoirhUVpAZpFINMdSg4KESVJT2w==',key_name='tempest-keypair-61082741',keypairs=<?>,launch_index=0,launched_at=2026-01-27T19:29:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea88acf7f2624e02bf32ef2e9e9dc9df',ramdisk_id='',reservation_id='r-89xfakeg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-115566997',owner_user_name='tempest-ServersTestManualDisk-115566997-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T19:29:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58fd103107724704ada2dddb2ed9734d',uuid=574ebdb2-135f-42aa-aa9a-4d986501daf4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a8863494-47b5-4610-a914-aacdead1041c", "address": "fa:16:3e:b8:61:05", "network": {"id": "57c7aa1f-1c4a-44de-8222-7196a10c62b5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1139789201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea88acf7f2624e02bf32ef2e9e9dc9df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8863494-47", "ovs_interfaceid": "a8863494-47b5-4610-a914-aacdead1041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.505 185484 DEBUG nova.network.os_vif_util [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Converting VIF {"id": "a8863494-47b5-4610-a914-aacdead1041c", "address": "fa:16:3e:b8:61:05", "network": {"id": "57c7aa1f-1c4a-44de-8222-7196a10c62b5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1139789201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea88acf7f2624e02bf32ef2e9e9dc9df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8863494-47", "ovs_interfaceid": "a8863494-47b5-4610-a914-aacdead1041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.506 185484 DEBUG nova.network.os_vif_util [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:05,bridge_name='br-int',has_traffic_filtering=True,id=a8863494-47b5-4610-a914-aacdead1041c,network=Network(57c7aa1f-1c4a-44de-8222-7196a10c62b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8863494-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.507 185484 DEBUG os_vif [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:05,bridge_name='br-int',has_traffic_filtering=True,id=a8863494-47b5-4610-a914-aacdead1041c,network=Network(57c7aa1f-1c4a-44de-8222-7196a10c62b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8863494-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 19:29:58 compute-0 podman[250285]: 2026-01-27 19:29:58.509452626 +0000 UTC m=+0.130895158 container died b877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.510 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.510 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8863494-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.513 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.517 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.521 185484 INFO os_vif [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:61:05,bridge_name='br-int',has_traffic_filtering=True,id=a8863494-47b5-4610-a914-aacdead1041c,network=Network(57c7aa1f-1c4a-44de-8222-7196a10c62b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8863494-47')
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.522 185484 INFO nova.virt.libvirt.driver [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Deleting instance files /var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4_del
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.523 185484 INFO nova.virt.libvirt.driver [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Deletion of /var/lib/nova/instances/574ebdb2-135f-42aa-aa9a-4d986501daf4_del complete
Jan 27 19:29:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2-userdata-shm.mount: Deactivated successfully.
Jan 27 19:29:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-555965bb3b595d72944cafef64234778ace073f6377333104bcece89e1725430-merged.mount: Deactivated successfully.
Jan 27 19:29:58 compute-0 podman[250285]: 2026-01-27 19:29:58.598345275 +0000 UTC m=+0.219787807 container cleanup b877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 19:29:58 compute-0 systemd[1]: libpod-conmon-b877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2.scope: Deactivated successfully.
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.754 185484 INFO nova.compute.manager [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Took 0.64 seconds to destroy the instance on the hypervisor.
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.755 185484 DEBUG oslo.service.loopingcall [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.756 185484 DEBUG nova.compute.manager [-] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.756 185484 DEBUG nova.network.neutron [-] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 19:29:58 compute-0 podman[250327]: 2026-01-27 19:29:58.791510553 +0000 UTC m=+0.153117808 container remove b877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.801 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[43f150a6-6c4b-4688-b74f-5ca161d710f1]: (4, ('Tue Jan 27 07:29:58 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5 (b877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2)\nb877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2\nTue Jan 27 07:29:58 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5 (b877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2)\nb877ca86494f3461b0947161442d7f91f93eedd6efcdc646eb35efed9b8dc9c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.804 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[3e46539c-11f3-45f9-b6b4-dcb32f1eba91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.806 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57c7aa1f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:29:58 compute-0 kernel: tap57c7aa1f-10: left promiscuous mode
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.809 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:58 compute-0 nova_compute[185480]: 2026-01-27 19:29:58.824 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.829 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[c36df1ab-3ab2-41eb-a5b5-b97875036581]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.853 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1ec2df-9aa4-4082-9619-f1571d81c6a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.855 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[c060f1eb-15f8-4526-acf5-a6c229dc0a25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.873 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[a52753f6-d6d7-4ced-8ff3-7d65dbf465e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529886, 'reachable_time': 39020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250341, 'error': None, 'target': 'ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.877 107353 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-57c7aa1f-1c4a-44de-8222-7196a10c62b5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 19:29:58 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:29:58.878 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[9463f6e4-151c-46ed-81b3-fb242aff45c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:29:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d57c7aa1f\x2d1c4a\x2d44de\x2d8222\x2d7196a10c62b5.mount: Deactivated successfully.
Jan 27 19:29:58 compute-0 sshd-session[250235]: Connection closed by authenticating user root 156.227.233.86 port 54964 [preauth]
Jan 27 19:29:59 compute-0 nova_compute[185480]: 2026-01-27 19:29:59.535 185484 DEBUG nova.network.neutron [req-107e110b-eea7-4858-8fc4-5ca9534e5fb2 req-b5237cf5-78c4-46bb-b5e6-22632270ad90 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Updated VIF entry in instance network info cache for port a8863494-47b5-4610-a914-aacdead1041c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:29:59 compute-0 nova_compute[185480]: 2026-01-27 19:29:59.535 185484 DEBUG nova.network.neutron [req-107e110b-eea7-4858-8fc4-5ca9534e5fb2 req-b5237cf5-78c4-46bb-b5e6-22632270ad90 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Updating instance_info_cache with network_info: [{"id": "a8863494-47b5-4610-a914-aacdead1041c", "address": "fa:16:3e:b8:61:05", "network": {"id": "57c7aa1f-1c4a-44de-8222-7196a10c62b5", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1139789201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea88acf7f2624e02bf32ef2e9e9dc9df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8863494-47", "ovs_interfaceid": "a8863494-47b5-4610-a914-aacdead1041c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:29:59 compute-0 nova_compute[185480]: 2026-01-27 19:29:59.612 185484 DEBUG oslo_concurrency.lockutils [req-107e110b-eea7-4858-8fc4-5ca9534e5fb2 req-b5237cf5-78c4-46bb-b5e6-22632270ad90 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-574ebdb2-135f-42aa-aa9a-4d986501daf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:29:59 compute-0 podman[201378]: time="2026-01-27T19:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:29:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:29:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Jan 27 19:30:00 compute-0 sshd-session[250342]: Connection closed by authenticating user root 156.227.233.86 port 40832 [preauth]
Jan 27 19:30:01 compute-0 podman[250348]: 2026-01-27 19:30:01.317890209 +0000 UTC m=+0.079839886 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 19:30:01 compute-0 podman[250346]: 2026-01-27 19:30:01.318228778 +0000 UTC m=+0.087612649 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:30:01 compute-0 sshd-session[250344]: Connection closed by authenticating user root 156.227.233.86 port 32778 [preauth]
Jan 27 19:30:01 compute-0 podman[250347]: 2026-01-27 19:30:01.391344036 +0000 UTC m=+0.155689582 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 19:30:01 compute-0 openstack_network_exporter[204477]: ERROR   19:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:30:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:30:01 compute-0 openstack_network_exporter[204477]: ERROR   19:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:30:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:30:01 compute-0 nova_compute[185480]: 2026-01-27 19:30:01.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:30:02 compute-0 sshd-session[250410]: Connection closed by authenticating user root 156.227.233.86 port 45676 [preauth]
Jan 27 19:30:02 compute-0 nova_compute[185480]: 2026-01-27 19:30:02.679 185484 DEBUG nova.network.neutron [-] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:30:02 compute-0 nova_compute[185480]: 2026-01-27 19:30:02.875 185484 DEBUG nova.compute.manager [req-24bf2736-a446-4f4b-9f9c-c341df5f98bb req-a6c142d1-9971-4d20-8dc5-c67080896db4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Received event network-vif-plugged-a8863494-47b5-4610-a914-aacdead1041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:30:02 compute-0 nova_compute[185480]: 2026-01-27 19:30:02.876 185484 DEBUG oslo_concurrency.lockutils [req-24bf2736-a446-4f4b-9f9c-c341df5f98bb req-a6c142d1-9971-4d20-8dc5-c67080896db4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:30:02 compute-0 nova_compute[185480]: 2026-01-27 19:30:02.876 185484 DEBUG oslo_concurrency.lockutils [req-24bf2736-a446-4f4b-9f9c-c341df5f98bb req-a6c142d1-9971-4d20-8dc5-c67080896db4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:30:02 compute-0 nova_compute[185480]: 2026-01-27 19:30:02.876 185484 DEBUG oslo_concurrency.lockutils [req-24bf2736-a446-4f4b-9f9c-c341df5f98bb req-a6c142d1-9971-4d20-8dc5-c67080896db4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:30:02 compute-0 nova_compute[185480]: 2026-01-27 19:30:02.877 185484 DEBUG nova.compute.manager [req-24bf2736-a446-4f4b-9f9c-c341df5f98bb req-a6c142d1-9971-4d20-8dc5-c67080896db4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] No waiting events found dispatching network-vif-plugged-a8863494-47b5-4610-a914-aacdead1041c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:30:02 compute-0 nova_compute[185480]: 2026-01-27 19:30:02.877 185484 WARNING nova.compute.manager [req-24bf2736-a446-4f4b-9f9c-c341df5f98bb req-a6c142d1-9971-4d20-8dc5-c67080896db4 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Received unexpected event network-vif-plugged-a8863494-47b5-4610-a914-aacdead1041c for instance with vm_state active and task_state deleting.
Jan 27 19:30:02 compute-0 nova_compute[185480]: 2026-01-27 19:30:02.887 185484 DEBUG nova.compute.manager [req-66a319fd-0de8-46d3-97df-36e88024f9cd req-35864ae8-52c7-4bd8-99f9-6507c8194e81 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Received event network-vif-deleted-a8863494-47b5-4610-a914-aacdead1041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:30:02 compute-0 nova_compute[185480]: 2026-01-27 19:30:02.888 185484 INFO nova.compute.manager [req-66a319fd-0de8-46d3-97df-36e88024f9cd req-35864ae8-52c7-4bd8-99f9-6507c8194e81 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Neutron deleted interface a8863494-47b5-4610-a914-aacdead1041c; detaching it from the instance and deleting it from the info cache
Jan 27 19:30:02 compute-0 nova_compute[185480]: 2026-01-27 19:30:02.888 185484 DEBUG nova.network.neutron [req-66a319fd-0de8-46d3-97df-36e88024f9cd req-35864ae8-52c7-4bd8-99f9-6507c8194e81 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:30:02 compute-0 nova_compute[185480]: 2026-01-27 19:30:02.996 185484 INFO nova.compute.manager [-] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Took 4.24 seconds to deallocate network for instance.
Jan 27 19:30:03 compute-0 nova_compute[185480]: 2026-01-27 19:30:03.004 185484 DEBUG nova.compute.manager [req-66a319fd-0de8-46d3-97df-36e88024f9cd req-35864ae8-52c7-4bd8-99f9-6507c8194e81 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Detach interface failed, port_id=a8863494-47b5-4610-a914-aacdead1041c, reason: Instance 574ebdb2-135f-42aa-aa9a-4d986501daf4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 27 19:30:03 compute-0 nova_compute[185480]: 2026-01-27 19:30:03.310 185484 DEBUG oslo_concurrency.lockutils [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:30:03 compute-0 nova_compute[185480]: 2026-01-27 19:30:03.311 185484 DEBUG oslo_concurrency.lockutils [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:30:03 compute-0 nova_compute[185480]: 2026-01-27 19:30:03.334 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:03 compute-0 sshd-session[250412]: Connection closed by authenticating user root 156.227.233.86 port 59110 [preauth]
Jan 27 19:30:03 compute-0 nova_compute[185480]: 2026-01-27 19:30:03.514 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:03 compute-0 nova_compute[185480]: 2026-01-27 19:30:03.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:30:03 compute-0 nova_compute[185480]: 2026-01-27 19:30:03.702 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:30:04 compute-0 sshd-session[250414]: Connection closed by authenticating user root 156.227.233.86 port 43296 [preauth]
Jan 27 19:30:04 compute-0 nova_compute[185480]: 2026-01-27 19:30:04.361 185484 DEBUG nova.compute.provider_tree [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:30:04 compute-0 nova_compute[185480]: 2026-01-27 19:30:04.408 185484 DEBUG nova.scheduler.client.report [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:30:04 compute-0 nova_compute[185480]: 2026-01-27 19:30:04.490 185484 DEBUG oslo_concurrency.lockutils [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:30:04 compute-0 nova_compute[185480]: 2026-01-27 19:30:04.492 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:30:04 compute-0 nova_compute[185480]: 2026-01-27 19:30:04.492 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:30:04 compute-0 nova_compute[185480]: 2026-01-27 19:30:04.492 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:30:04 compute-0 nova_compute[185480]: 2026-01-27 19:30:04.602 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:30:04 compute-0 nova_compute[185480]: 2026-01-27 19:30:04.685 185484 INFO nova.scheduler.client.report [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Deleted allocations for instance 574ebdb2-135f-42aa-aa9a-4d986501daf4
Jan 27 19:30:04 compute-0 nova_compute[185480]: 2026-01-27 19:30:04.695 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:30:04 compute-0 nova_compute[185480]: 2026-01-27 19:30:04.696 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:30:04 compute-0 nova_compute[185480]: 2026-01-27 19:30:04.759 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:30:05 compute-0 nova_compute[185480]: 2026-01-27 19:30:05.230 185484 DEBUG oslo_concurrency.lockutils [None req-f5743fe0-41d1-4dc2-b92d-b82a02337409 58fd103107724704ada2dddb2ed9734d ea88acf7f2624e02bf32ef2e9e9dc9df - - default default] Lock "574ebdb2-135f-42aa-aa9a-4d986501daf4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:30:05 compute-0 nova_compute[185480]: 2026-01-27 19:30:05.337 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:30:05 compute-0 nova_compute[185480]: 2026-01-27 19:30:05.338 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5122MB free_disk=72.35139465332031GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:30:05 compute-0 nova_compute[185480]: 2026-01-27 19:30:05.338 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:30:05 compute-0 nova_compute[185480]: 2026-01-27 19:30:05.339 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:30:05 compute-0 sshd-session[250417]: Connection closed by authenticating user root 156.227.233.86 port 60448 [preauth]
Jan 27 19:30:05 compute-0 nova_compute[185480]: 2026-01-27 19:30:05.525 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance f79ddcc5-ee21-43e8-9d0d-60476a477361 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:30:05 compute-0 nova_compute[185480]: 2026-01-27 19:30:05.526 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:30:05 compute-0 nova_compute[185480]: 2026-01-27 19:30:05.526 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:30:05 compute-0 nova_compute[185480]: 2026-01-27 19:30:05.604 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:30:05 compute-0 nova_compute[185480]: 2026-01-27 19:30:05.626 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:30:05 compute-0 nova_compute[185480]: 2026-01-27 19:30:05.895 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:30:05 compute-0 nova_compute[185480]: 2026-01-27 19:30:05.896 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:30:05 compute-0 ovn_controller[97647]: 2026-01-27T19:30:05Z|00115|binding|INFO|Releasing lport 48c695f0-2fed-4bb1-9b53-847c4dc25e7f from this chassis (sb_readonly=0)
Jan 27 19:30:06 compute-0 nova_compute[185480]: 2026-01-27 19:30:06.108 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:06 compute-0 sshd-session[250425]: Connection closed by authenticating user root 156.227.233.86 port 45820 [preauth]
Jan 27 19:30:06 compute-0 nova_compute[185480]: 2026-01-27 19:30:06.892 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:30:06 compute-0 nova_compute[185480]: 2026-01-27 19:30:06.893 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:30:06 compute-0 nova_compute[185480]: 2026-01-27 19:30:06.894 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:30:07 compute-0 sshd-session[250427]: Connection closed by authenticating user root 156.227.233.86 port 59274 [preauth]
Jan 27 19:30:07 compute-0 nova_compute[185480]: 2026-01-27 19:30:07.409 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:07 compute-0 nova_compute[185480]: 2026-01-27 19:30:07.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:30:07 compute-0 nova_compute[185480]: 2026-01-27 19:30:07.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:30:08 compute-0 nova_compute[185480]: 2026-01-27 19:30:08.338 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:08 compute-0 sshd-session[250429]: Connection closed by authenticating user root 156.227.233.86 port 45978 [preauth]
Jan 27 19:30:08 compute-0 nova_compute[185480]: 2026-01-27 19:30:08.516 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:09 compute-0 sshd-session[250431]: Connection closed by authenticating user root 156.227.233.86 port 32842 [preauth]
Jan 27 19:30:10 compute-0 nova_compute[185480]: 2026-01-27 19:30:10.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:30:11 compute-0 sshd-session[250433]: Connection closed by authenticating user root 156.227.233.86 port 58154 [preauth]
Jan 27 19:30:11 compute-0 nova_compute[185480]: 2026-01-27 19:30:11.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:30:11 compute-0 nova_compute[185480]: 2026-01-27 19:30:11.719 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:30:12 compute-0 sshd-session[250435]: Connection closed by authenticating user root 156.227.233.86 port 44362 [preauth]
Jan 27 19:30:13 compute-0 nova_compute[185480]: 2026-01-27 19:30:13.343 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:13 compute-0 sshd-session[250437]: Connection closed by authenticating user root 156.227.233.86 port 56906 [preauth]
Jan 27 19:30:13 compute-0 nova_compute[185480]: 2026-01-27 19:30:13.402 185484 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769542198.4006622, 574ebdb2-135f-42aa-aa9a-4d986501daf4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:30:13 compute-0 nova_compute[185480]: 2026-01-27 19:30:13.403 185484 INFO nova.compute.manager [-] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] VM Stopped (Lifecycle Event)
Jan 27 19:30:13 compute-0 nova_compute[185480]: 2026-01-27 19:30:13.519 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:14 compute-0 nova_compute[185480]: 2026-01-27 19:30:14.070 185484 DEBUG nova.compute.manager [None req-c852e9ab-03c4-482b-8245-31822b41323d - - - - - -] [instance: 574ebdb2-135f-42aa-aa9a-4d986501daf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:30:14 compute-0 sshd-session[250441]: Invalid user sol from 45.148.10.240 port 55720
Jan 27 19:30:14 compute-0 sshd-session[250441]: Connection closed by invalid user sol 45.148.10.240 port 55720 [preauth]
Jan 27 19:30:14 compute-0 podman[250443]: 2026-01-27 19:30:14.358700765 +0000 UTC m=+0.122435699 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 19:30:14 compute-0 podman[250444]: 2026-01-27 19:30:14.369561314 +0000 UTC m=+0.126770507 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 19:30:14 compute-0 sshd-session[250439]: Connection closed by authenticating user root 156.227.233.86 port 39182 [preauth]
Jan 27 19:30:14 compute-0 podman[250485]: 2026-01-27 19:30:14.498299168 +0000 UTC m=+0.090780667 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_id=kepler, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, version=9.4, build-date=2024-09-18T21:23:30)
Jan 27 19:30:15 compute-0 sshd-session[250504]: Connection closed by authenticating user root 156.227.233.86 port 52818 [preauth]
Jan 27 19:30:16 compute-0 nova_compute[185480]: 2026-01-27 19:30:16.197 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Acquiring lock "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:30:16 compute-0 nova_compute[185480]: 2026-01-27 19:30:16.198 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lock "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:30:16 compute-0 nova_compute[185480]: 2026-01-27 19:30:16.219 185484 DEBUG nova.compute.manager [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 19:30:16 compute-0 sshd-session[250506]: Connection closed by authenticating user root 156.227.233.86 port 36736 [preauth]
Jan 27 19:30:17 compute-0 nova_compute[185480]: 2026-01-27 19:30:17.165 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:30:17 compute-0 nova_compute[185480]: 2026-01-27 19:30:17.166 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:30:17 compute-0 nova_compute[185480]: 2026-01-27 19:30:17.370 185484 DEBUG nova.virt.hardware [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 19:30:17 compute-0 nova_compute[185480]: 2026-01-27 19:30:17.370 185484 INFO nova.compute.claims [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Claim successful on node compute-0.ctlplane.example.com
Jan 27 19:30:17 compute-0 sshd-session[250508]: Connection closed by authenticating user root 156.227.233.86 port 50658 [preauth]
Jan 27 19:30:18 compute-0 nova_compute[185480]: 2026-01-27 19:30:18.346 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:18 compute-0 sshd-session[250510]: Connection closed by authenticating user root 156.227.233.86 port 40188 [preauth]
Jan 27 19:30:18 compute-0 nova_compute[185480]: 2026-01-27 19:30:18.522 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:18 compute-0 nova_compute[185480]: 2026-01-27 19:30:18.757 185484 DEBUG nova.compute.provider_tree [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:30:19 compute-0 nova_compute[185480]: 2026-01-27 19:30:19.403 185484 DEBUG nova.scheduler.client.report [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:30:19 compute-0 sshd-session[250512]: Connection closed by authenticating user root 156.227.233.86 port 53982 [preauth]
Jan 27 19:30:20 compute-0 nova_compute[185480]: 2026-01-27 19:30:20.479 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:30:20 compute-0 nova_compute[185480]: 2026-01-27 19:30:20.481 185484 DEBUG nova.compute.manager [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 19:30:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:30:20.542 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:30:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:30:20.543 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:30:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:30:20.544 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:30:20 compute-0 sshd-session[250514]: Connection closed by authenticating user root 156.227.233.86 port 38486 [preauth]
Jan 27 19:30:21 compute-0 sshd-session[250516]: Connection closed by authenticating user root 156.227.233.86 port 59864 [preauth]
Jan 27 19:30:22 compute-0 nova_compute[185480]: 2026-01-27 19:30:22.313 185484 DEBUG nova.compute.manager [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 19:30:22 compute-0 nova_compute[185480]: 2026-01-27 19:30:22.314 185484 DEBUG nova.network.neutron [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 19:30:22 compute-0 podman[250520]: 2026-01-27 19:30:22.322307248 +0000 UTC m=+0.092484139 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 19:30:22 compute-0 nova_compute[185480]: 2026-01-27 19:30:22.809 185484 INFO nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 19:30:22 compute-0 nova_compute[185480]: 2026-01-27 19:30:22.838 185484 DEBUG nova.compute.manager [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 19:30:22 compute-0 sshd-session[250518]: Connection closed by authenticating user root 156.227.233.86 port 44338 [preauth]
Jan 27 19:30:22 compute-0 nova_compute[185480]: 2026-01-27 19:30:22.980 185484 DEBUG nova.compute.manager [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 19:30:22 compute-0 nova_compute[185480]: 2026-01-27 19:30:22.983 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 19:30:22 compute-0 nova_compute[185480]: 2026-01-27 19:30:22.984 185484 INFO nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Creating image(s)
Jan 27 19:30:22 compute-0 nova_compute[185480]: 2026-01-27 19:30:22.985 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Acquiring lock "/var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:30:22 compute-0 nova_compute[185480]: 2026-01-27 19:30:22.985 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lock "/var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:30:22 compute-0 nova_compute[185480]: 2026-01-27 19:30:22.986 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lock "/var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.022 185484 DEBUG oslo_concurrency.processutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.133 185484 DEBUG oslo_concurrency.processutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.135 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Acquiring lock "69104a6fcf619df4c492b27a202c23b5821c0e32" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.136 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.155 185484 DEBUG oslo_concurrency.processutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.236 185484 DEBUG oslo_concurrency.processutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.237 185484 DEBUG oslo_concurrency.processutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32,backing_fmt=raw /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.285 185484 DEBUG oslo_concurrency.processutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32,backing_fmt=raw /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.286 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lock "69104a6fcf619df4c492b27a202c23b5821c0e32" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.287 185484 DEBUG oslo_concurrency.processutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.348 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.399 185484 DEBUG oslo_concurrency.processutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/69104a6fcf619df4c492b27a202c23b5821c0e32 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.400 185484 DEBUG nova.virt.disk.api [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Checking if we can resize image /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.400 185484 DEBUG oslo_concurrency.processutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.476 185484 DEBUG oslo_concurrency.processutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.479 185484 DEBUG nova.virt.disk.api [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Cannot resize image /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.480 185484 DEBUG nova.objects.instance [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lazy-loading 'migration_context' on Instance uuid 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.526 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.604 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.604 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Ensure instance console log exists: /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.605 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.606 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:30:23 compute-0 nova_compute[185480]: 2026-01-27 19:30:23.606 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:30:23 compute-0 sshd-session[250543]: Connection closed by authenticating user root 156.227.233.86 port 59858 [preauth]
Jan 27 19:30:24 compute-0 sshd-session[250557]: Connection closed by authenticating user root 156.227.233.86 port 48422 [preauth]
Jan 27 19:30:26 compute-0 sshd-session[250559]: Connection closed by authenticating user root 156.227.233.86 port 36884 [preauth]
Jan 27 19:30:27 compute-0 sshd-session[250562]: Connection closed by authenticating user root 156.227.233.86 port 49814 [preauth]
Jan 27 19:30:28 compute-0 nova_compute[185480]: 2026-01-27 19:30:28.040 185484 DEBUG nova.policy [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd57ebe90e53d40899a8b3f3ce873df18', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b8690906d754ad4b5878d33231c97f9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 19:30:28 compute-0 sshd-session[250564]: Connection closed by authenticating user root 156.227.233.86 port 33068 [preauth]
Jan 27 19:30:28 compute-0 nova_compute[185480]: 2026-01-27 19:30:28.352 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:28 compute-0 nova_compute[185480]: 2026-01-27 19:30:28.529 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:29 compute-0 sshd-session[250566]: Connection closed by authenticating user root 156.227.233.86 port 44508 [preauth]
Jan 27 19:30:29 compute-0 podman[250568]: 2026-01-27 19:30:29.382400619 +0000 UTC m=+0.136176599 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 19:30:29 compute-0 podman[201378]: time="2026-01-27T19:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:30:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:30:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4371 "" "Go-http-client/1.1"
Jan 27 19:30:30 compute-0 sshd-session[250578]: Connection closed by authenticating user root 156.227.233.86 port 55860 [preauth]
Jan 27 19:30:31 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:30:31.003 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:30:31 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:30:31.004 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:30:31 compute-0 nova_compute[185480]: 2026-01-27 19:30:31.011 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:31 compute-0 sshd-session[250589]: Connection closed by authenticating user root 156.227.233.86 port 42494 [preauth]
Jan 27 19:30:31 compute-0 openstack_network_exporter[204477]: ERROR   19:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:30:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:30:31 compute-0 openstack_network_exporter[204477]: ERROR   19:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:30:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.104 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.105 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.115 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75da31edb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:30:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:32.117 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:32 compute-0 sshd-session[250591]: Connection closed by authenticating user root 156.227.233.86 port 56220 [preauth]
Jan 27 19:30:32 compute-0 podman[250596]: 2026-01-27 19:30:32.335191483 +0000 UTC m=+0.088119720 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 19:30:32 compute-0 podman[250594]: 2026-01-27 19:30:32.337278355 +0000 UTC m=+0.105816868 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:30:32 compute-0 podman[250595]: 2026-01-27 19:30:32.399584776 +0000 UTC m=+0.157579418 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 27 19:30:33 compute-0 sshd-session[250649]: Connection closed by authenticating user root 156.227.233.86 port 41678 [preauth]
Jan 27 19:30:33 compute-0 nova_compute[185480]: 2026-01-27 19:30:33.355 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:33 compute-0 nova_compute[185480]: 2026-01-27 19:30:33.532 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.947 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:32 GMT Server: Apache x-compute-request-id: req-77fc8c43-e41b-457a-9475-0bccc24484c8 x-openstack-request-id: req-77fc8c43-e41b-457a-9475-0bccc24484c8 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.947 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-77fc8c43-e41b-457a-9475-0bccc24484c8 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-77fc8c43-e41b-457a-9475-0bccc24484c8): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-77fc8c43-e41b-457a-9475-0bccc24484c8)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-77fc8c43-e41b-457a-9475-0bccc24484c8)
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.948 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.953 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.959 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:33.960 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.118 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:33 GMT Server: Apache x-compute-request-id: req-5246bbd7-aefd-42dd-b01b-d3fecd1874ef x-openstack-request-id: req-5246bbd7-aefd-42dd-b01b-d3fecd1874ef _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.118 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.118 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-5246bbd7-aefd-42dd-b01b-d3fecd1874ef request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-5246bbd7-aefd-42dd-b01b-d3fecd1874ef): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-5246bbd7-aefd-42dd-b01b-d3fecd1874ef)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-5246bbd7-aefd-42dd-b01b-d3fecd1874ef)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.119 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.127 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.129 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:34 compute-0 sshd-session[250659]: Connection closed by authenticating user root 156.227.233.86 port 54392 [preauth]
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:34 GMT Server: Apache x-compute-request-id: req-dadf05dd-132a-4ef3-babc-f30b1aa5f4dd x-openstack-request-id: req-dadf05dd-132a-4ef3-babc-f30b1aa5f4dd _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-dadf05dd-132a-4ef3-babc-f30b1aa5f4dd request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-dadf05dd-132a-4ef3-babc-f30b1aa5f4dd): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-dadf05dd-132a-4ef3-babc-f30b1aa5f4dd)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-dadf05dd-132a-4ef3-babc-f30b1aa5f4dd)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.221 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.223 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.231 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.232 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.379 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:34 GMT Server: Apache x-compute-request-id: req-882b1dca-b465-40cc-bceb-a9da8554ed35 x-openstack-request-id: req-882b1dca-b465-40cc-bceb-a9da8554ed35 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.379 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.379 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-882b1dca-b465-40cc-bceb-a9da8554ed35 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-882b1dca-b465-40cc-bceb-a9da8554ed35): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-882b1dca-b465-40cc-bceb-a9da8554ed35)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-882b1dca-b465-40cc-bceb-a9da8554ed35)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.380 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.382 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.388 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.389 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.453 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:34 GMT Server: Apache x-compute-request-id: req-b2b5ef8c-5ce5-4cfd-98fa-0be1a08a2dfd x-openstack-request-id: req-b2b5ef8c-5ce5-4cfd-98fa-0be1a08a2dfd _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.453 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-b2b5ef8c-5ce5-4cfd-98fa-0be1a08a2dfd request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-b2b5ef8c-5ce5-4cfd-98fa-0be1a08a2dfd): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-b2b5ef8c-5ce5-4cfd-98fa-0be1a08a2dfd)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-b2b5ef8c-5ce5-4cfd-98fa-0be1a08a2dfd)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.454 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.457 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.463 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.465 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.564 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:34 GMT Server: Apache x-compute-request-id: req-582609b7-9fa7-48ab-ba31-0a5b03759a6d x-openstack-request-id: req-582609b7-9fa7-48ab-ba31-0a5b03759a6d _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.565 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.565 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-582609b7-9fa7-48ab-ba31-0a5b03759a6d request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-582609b7-9fa7-48ab-ba31-0a5b03759a6d): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-582609b7-9fa7-48ab-ba31-0a5b03759a6d)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-582609b7-9fa7-48ab-ba31-0a5b03759a6d)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.566 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.569 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.575 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.577 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.714 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:34 GMT Server: Apache x-compute-request-id: req-61d04688-2c5e-46d1-b92d-151c8e0878b0 x-openstack-request-id: req-61d04688-2c5e-46d1-b92d-151c8e0878b0 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-61d04688-2c5e-46d1-b92d-151c8e0878b0 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-61d04688-2c5e-46d1-b92d-151c8e0878b0): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-61d04688-2c5e-46d1-b92d-151c8e0878b0)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-61d04688-2c5e-46d1-b92d-151c8e0878b0)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.715 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.723 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.724 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.814 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:34 GMT Server: Apache x-compute-request-id: req-a1bd7451-441b-46a2-ac54-7453b811d1c6 x-openstack-request-id: req-a1bd7451-441b-46a2-ac54-7453b811d1c6 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-a1bd7451-441b-46a2-ac54-7453b811d1c6 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-a1bd7451-441b-46a2-ac54-7453b811d1c6): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-a1bd7451-441b-46a2-ac54-7453b811d1c6)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-a1bd7451-441b-46a2-ac54-7453b811d1c6)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.815 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.816 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.816 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.820 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.821 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.927 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:34 GMT Server: Apache x-compute-request-id: req-d281919e-5efe-4154-a477-e31f44f5d804 x-openstack-request-id: req-d281919e-5efe-4154-a477-e31f44f5d804 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.927 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-d281919e-5efe-4154-a477-e31f44f5d804 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-d281919e-5efe-4154-a477-e31f44f5d804): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-d281919e-5efe-4154-a477-e31f44f5d804)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-d281919e-5efe-4154-a477-e31f44f5d804)
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.928 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.931 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.931 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.937 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:34.938 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.011 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:34 GMT Server: Apache x-compute-request-id: req-b60cf86f-a6fb-4935-afc1-4b1e3f437d23 x-openstack-request-id: req-b60cf86f-a6fb-4935-afc1-4b1e3f437d23 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-b60cf86f-a6fb-4935-afc1-4b1e3f437d23 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-b60cf86f-a6fb-4935-afc1-4b1e3f437d23): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-b60cf86f-a6fb-4935-afc1-4b1e3f437d23)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-b60cf86f-a6fb-4935-afc1-4b1e3f437d23)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.012 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.015 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.023 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.025 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.135 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:35 GMT Server: Apache x-compute-request-id: req-d2a33d87-461a-4afe-b124-d60ca3ae38d5 x-openstack-request-id: req-d2a33d87-461a-4afe-b124-d60ca3ae38d5 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.135 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-d2a33d87-461a-4afe-b124-d60ca3ae38d5 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-d2a33d87-461a-4afe-b124-d60ca3ae38d5): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-d2a33d87-461a-4afe-b124-d60ca3ae38d5)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-d2a33d87-461a-4afe-b124-d60ca3ae38d5)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.136 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.138 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.138 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.143 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.145 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:35 compute-0 sshd-session[250661]: Connection closed by authenticating user root 156.227.233.86 port 40170 [preauth]
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.243 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:35 GMT Server: Apache x-compute-request-id: req-3888f48b-5e70-4840-b442-63e0b15eddac x-openstack-request-id: req-3888f48b-5e70-4840-b442-63e0b15eddac _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-3888f48b-5e70-4840-b442-63e0b15eddac request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-3888f48b-5e70-4840-b442-63e0b15eddac): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-3888f48b-5e70-4840-b442-63e0b15eddac)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-3888f48b-5e70-4840-b442-63e0b15eddac)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.244 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.248 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.248 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.253 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.255 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.349 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:35 GMT Server: Apache x-compute-request-id: req-a422651f-102a-43fd-a5aa-9f8782fb0183 x-openstack-request-id: req-a422651f-102a-43fd-a5aa-9f8782fb0183 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.349 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.349 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-a422651f-102a-43fd-a5aa-9f8782fb0183 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-a422651f-102a-43fd-a5aa-9f8782fb0183): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-a422651f-102a-43fd-a5aa-9f8782fb0183)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-a422651f-102a-43fd-a5aa-9f8782fb0183)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.352 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.356 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.363 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.365 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.774 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:35 GMT Server: Apache x-compute-request-id: req-d81e5720-4086-4563-bf34-c4b56d58eecc x-openstack-request-id: req-d81e5720-4086-4563-bf34-c4b56d58eecc _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.774 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-d81e5720-4086-4563-bf34-c4b56d58eecc request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-d81e5720-4086-4563-bf34-c4b56d58eecc): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-d81e5720-4086-4563-bf34-c4b56d58eecc)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-d81e5720-4086-4563-bf34-c4b56d58eecc)
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.775 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.776 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.781 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:35 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:35.783 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:36 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:30:36.006 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:30:36 compute-0 sshd-session[250663]: Connection closed by authenticating user root 156.227.233.86 port 53340 [preauth]
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.846 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:36 GMT Server: Apache x-compute-request-id: req-c5bb7f4a-d7f5-4b11-a7db-b93d06a082da x-openstack-request-id: req-c5bb7f4a-d7f5-4b11-a7db-b93d06a082da _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-c5bb7f4a-d7f5-4b11-a7db-b93d06a082da request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-c5bb7f4a-d7f5-4b11-a7db-b93d06a082da): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-c5bb7f4a-d7f5-4b11-a7db-b93d06a082da)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-c5bb7f4a-d7f5-4b11-a7db-b93d06a082da)
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.847 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.852 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:36 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:36.853 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:37 compute-0 sshd-session[250665]: Connection closed by authenticating user root 156.227.233.86 port 38954 [preauth]
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.831 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:37 GMT Server: Apache x-compute-request-id: req-d80c19e4-d30e-48f4-9ac9-3609012cd46e x-openstack-request-id: req-d80c19e4-d30e-48f4-9ac9-3609012cd46e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-d80c19e4-d30e-48f4-9ac9-3609012cd46e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-d80c19e4-d30e-48f4-9ac9-3609012cd46e): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-d80c19e4-d30e-48f4-9ac9-3609012cd46e)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-d80c19e4-d30e-48f4-9ac9-3609012cd46e)
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.832 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.834 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.840 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:37 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:37.841 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.033 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:37 GMT Server: Apache x-compute-request-id: req-da253a04-c6e0-4011-9efa-d8d6d290c355 x-openstack-request-id: req-da253a04-c6e0-4011-9efa-d8d6d290c355 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-da253a04-c6e0-4011-9efa-d8d6d290c355 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-da253a04-c6e0-4011-9efa-d8d6d290c355): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-da253a04-c6e0-4011-9efa-d8d6d290c355)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-da253a04-c6e0-4011-9efa-d8d6d290c355)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.034 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.037 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.042 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.044 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.151 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:38 GMT Server: Apache x-compute-request-id: req-eb0e1e27-f63e-4bea-b71b-734f1ce94792 x-openstack-request-id: req-eb0e1e27-f63e-4bea-b71b-734f1ce94792 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.151 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-eb0e1e27-f63e-4bea-b71b-734f1ce94792 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-eb0e1e27-f63e-4bea-b71b-734f1ce94792): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-eb0e1e27-f63e-4bea-b71b-734f1ce94792)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-eb0e1e27-f63e-4bea-b71b-734f1ce94792)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.152 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.155 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.161 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.162 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.276 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:38 GMT Server: Apache x-compute-request-id: req-24106912-51bb-4666-8bfe-1eb19f44a7d6 x-openstack-request-id: req-24106912-51bb-4666-8bfe-1eb19f44a7d6 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.276 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.276 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-24106912-51bb-4666-8bfe-1eb19f44a7d6 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-24106912-51bb-4666-8bfe-1eb19f44a7d6): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-24106912-51bb-4666-8bfe-1eb19f44a7d6)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-24106912-51bb-4666-8bfe-1eb19f44a7d6)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.277 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.287 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.288 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:38 compute-0 nova_compute[185480]: 2026-01-27 19:30:38.358 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.399 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:38 GMT Server: Apache x-compute-request-id: req-03c8a36e-f6ed-4425-9e70-3ec6e82f4db9 x-openstack-request-id: req-03c8a36e-f6ed-4425-9e70-3ec6e82f4db9 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-03c8a36e-f6ed-4425-9e70-3ec6e82f4db9 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-03c8a36e-f6ed-4425-9e70-3ec6e82f4db9): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-03c8a36e-f6ed-4425-9e70-3ec6e82f4db9)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-03c8a36e-f6ed-4425-9e70-3ec6e82f4db9)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.400 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.404 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.408 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.409 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.504 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:38 GMT Server: Apache x-compute-request-id: req-cd6151e5-5c76-4c83-8d08-43684c523d5d x-openstack-request-id: req-cd6151e5-5c76-4c83-8d08-43684c523d5d _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.504 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.504 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-cd6151e5-5c76-4c83-8d08-43684c523d5d request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-cd6151e5-5c76-4c83-8d08-43684c523d5d): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-cd6151e5-5c76-4c83-8d08-43684c523d5d)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-cd6151e5-5c76-4c83-8d08-43684c523d5d)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.505 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.506 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.511 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.512 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:38 compute-0 nova_compute[185480]: 2026-01-27 19:30:38.536 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.579 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:38 GMT Server: Apache x-compute-request-id: req-53780791-3f95-4e4c-bb5c-33387ab9079c x-openstack-request-id: req-53780791-3f95-4e4c-bb5c-33387ab9079c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.579 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-53780791-3f95-4e4c-bb5c-33387ab9079c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-53780791-3f95-4e4c-bb5c-33387ab9079c): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-53780791-3f95-4e4c-bb5c-33387ab9079c)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-53780791-3f95-4e4c-bb5c-33387ab9079c)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.580 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.582 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.587 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.588 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.644 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:38 GMT Server: Apache x-compute-request-id: req-39953dd9-dd93-4a53-896e-8cd5c1d3673c x-openstack-request-id: req-39953dd9-dd93-4a53-896e-8cd5c1d3673c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.644 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.644 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-39953dd9-dd93-4a53-896e-8cd5c1d3673c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-39953dd9-dd93-4a53-896e-8cd5c1d3673c): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-39953dd9-dd93-4a53-896e-8cd5c1d3673c)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-39953dd9-dd93-4a53-896e-8cd5c1d3673c)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.645 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.647 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.652 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.652 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:38 compute-0 sshd-session[250668]: Connection closed by authenticating user root 156.227.233.86 port 50078 [preauth]
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:38 GMT Server: Apache x-compute-request-id: req-39d52da2-d802-44af-8308-982765e3dc53 x-openstack-request-id: req-39d52da2-d802-44af-8308-982765e3dc53 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-39d52da2-d802-44af-8308-982765e3dc53 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-39d52da2-d802-44af-8308-982765e3dc53): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-39d52da2-d802-44af-8308-982765e3dc53)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-39d52da2-d802-44af-8308-982765e3dc53)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.707 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.712 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.713 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.893 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:38 GMT Server: Apache x-compute-request-id: req-c04dda0d-a104-462a-8024-d02cb3ea3381 x-openstack-request-id: req-c04dda0d-a104-462a-8024-d02cb3ea3381 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-c04dda0d-a104-462a-8024-d02cb3ea3381 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-c04dda0d-a104-462a-8024-d02cb3ea3381): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-c04dda0d-a104-462a-8024-d02cb3ea3381)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-c04dda0d-a104-462a-8024-d02cb3ea3381)
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.894 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.896 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.902 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:30:38 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:38.903 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.092 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Tue, 27 Jan 2026 19:30:38 GMT Server: Apache x-compute-request-id: req-ee361578-d234-4802-8c9c-fb25d767d3c1 x-openstack-request-id: req-ee361578-d234-4802-8c9c-fb25d767d3c1 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-ee361578-d234-4802-8c9c-fb25d767d3c1 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-ee361578-d234-4802-8c9c-fb25d767d3c1): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: The Keystone service is temporarily unavailable.
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]:  (HTTP 503) (Request-ID: req-ee361578-d234-4802-8c9c-fb25d767d3c1)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-ee361578-d234-4802-8c9c-fb25d767d3c1)
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.093 14 ERROR ceilometer.polling.manager 
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:30:39.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:30:39 compute-0 nova_compute[185480]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:30:39 compute-0 nova_compute[185480]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db     raise result
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:30:39 compute-0 nova_compute[185480]: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db 
Jan 27 19:30:39 compute-0 sshd-session[250670]: Connection closed by authenticating user root 156.227.233.86 port 41300 [preauth]
Jan 27 19:30:39 compute-0 rsyslogd[235877]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:30:39 compute-0 rsyslogd[235877]: message too long (9052) with configured size 8096, begin of message is: 2026-01-27 19:30:39.593 185484 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:30:40 compute-0 sshd-session[250672]: Connection closed by authenticating user root 156.227.233.86 port 56766 [preauth]
Jan 27 19:30:41 compute-0 sshd-session[250674]: Connection closed by authenticating user root 156.227.233.86 port 44112 [preauth]
Jan 27 19:30:42 compute-0 sshd-session[250676]: Connection closed by authenticating user root 156.227.233.86 port 57144 [preauth]
Jan 27 19:30:43 compute-0 nova_compute[185480]: 2026-01-27 19:30:43.361 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:43 compute-0 nova_compute[185480]: 2026-01-27 19:30:43.538 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:43 compute-0 sshd-session[250678]: Connection closed by authenticating user root 156.227.233.86 port 43246 [preauth]
Jan 27 19:30:44 compute-0 sshd-session[250680]: Connection closed by authenticating user root 156.227.233.86 port 57540 [preauth]
Jan 27 19:30:44 compute-0 podman[250682]: 2026-01-27 19:30:44.801248514 +0000 UTC m=+0.086042950 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 19:30:44 compute-0 podman[250683]: 2026-01-27 19:30:44.827426251 +0000 UTC m=+0.103207833 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:30:44 compute-0 podman[250684]: 2026-01-27 19:30:44.83145413 +0000 UTC m=+0.102747531 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, com.redhat.component=ubi9-container, config_id=kepler, container_name=kepler)
Jan 27 19:30:45 compute-0 sshd-session[250742]: Connection closed by authenticating user root 156.227.233.86 port 44380 [preauth]
Jan 27 19:30:46 compute-0 sshd-session[250744]: Connection closed by authenticating user root 156.227.233.86 port 58054 [preauth]
Jan 27 19:30:47 compute-0 sshd-session[250746]: Connection closed by authenticating user root 156.227.233.86 port 44348 [preauth]
Jan 27 19:30:48 compute-0 nova_compute[185480]: 2026-01-27 19:30:48.362 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:48 compute-0 nova_compute[185480]: 2026-01-27 19:30:48.543 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:48 compute-0 sshd-session[250748]: Connection closed by authenticating user root 156.227.233.86 port 58458 [preauth]
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:30:49 compute-0 nova_compute[185480]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:30:49 compute-0 nova_compute[185480]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db     raise result
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:30:49 compute-0 nova_compute[185480]: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db 
Jan 27 19:30:49 compute-0 rsyslogd[235877]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:30:49 compute-0 rsyslogd[235877]: message too long (9052) with configured size 8096, begin of message is: 2026-01-27 19:30:49.412 185484 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:30:49 compute-0 sshd-session[250750]: Connection closed by authenticating user root 156.227.233.86 port 42338 [preauth]
Jan 27 19:30:50 compute-0 sshd-session[250753]: Connection closed by authenticating user root 156.227.233.86 port 57976 [preauth]
Jan 27 19:30:51 compute-0 sshd-session[250755]: Connection closed by authenticating user root 156.227.233.86 port 44422 [preauth]
Jan 27 19:30:53 compute-0 podman[250759]: 2026-01-27 19:30:53.338452388 +0000 UTC m=+0.114094003 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350)
Jan 27 19:30:53 compute-0 nova_compute[185480]: 2026-01-27 19:30:53.365 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:53 compute-0 sshd-session[250757]: Connection closed by authenticating user root 156.227.233.86 port 59910 [preauth]
Jan 27 19:30:53 compute-0 nova_compute[185480]: 2026-01-27 19:30:53.546 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:30:54 compute-0 sshd-session[250781]: Connection closed by authenticating user root 156.227.233.86 port 52510 [preauth]
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Error during ComputeManager._heal_instance_info_cache: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:30:54 compute-0 nova_compute[185480]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:30:54 compute-0 nova_compute[185480]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 525, in get_by_uuid\n    db_inst = cls._db_instance_get_by_uuid(context, uuid, columns_to_join,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 517, in _db_instance_get_by_uuid\n    return db.instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1395, in instance_get_by_uuid\n    return _instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1400, in _instance_get_by_uuid\n    result = _build_instance_get(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task     task(self, context)
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9891, in _heal_instance_info_cache
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task     inst = objects.Instance.get_by_uuid(
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task     raise result
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 525, in get_by_uuid\n    db_inst = cls._db_instance_get_by_uuid(context, uuid, columns_to_join,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 517, in _db_instance_get_by_uuid\n    return db.instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1395, in instance_get_by_uuid\n    return _instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1400, in _instance_get_by_uuid\n    result = _build_instance_get(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:30:54 compute-0 nova_compute[185480]: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task 
Jan 27 19:30:54 compute-0 rsyslogd[235877]: message too long (8833) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:30:54 compute-0 rsyslogd[235877]: message too long (8897) with configured size 8096, begin of message is: 2026-01-27 19:30:54.562 185484 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:30:55 compute-0 sshd-session[250783]: Connection closed by authenticating user root 156.227.233.86 port 38950 [preauth]
Jan 27 19:30:56 compute-0 sshd-session[250785]: Invalid user user from 156.227.233.86 port 53634
Jan 27 19:30:56 compute-0 sshd-session[250785]: Connection closed by invalid user user 156.227.233.86 port 53634 [preauth]
Jan 27 19:30:57 compute-0 sshd-session[250787]: Invalid user user from 156.227.233.86 port 40164
Jan 27 19:30:57 compute-0 sshd-session[250787]: Connection closed by invalid user user 156.227.233.86 port 40164 [preauth]
Jan 27 19:30:58 compute-0 nova_compute[185480]: 2026-01-27 19:30:58.369 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:58 compute-0 sshd-session[250789]: Invalid user user from 156.227.233.86 port 56390
Jan 27 19:30:58 compute-0 nova_compute[185480]: 2026-01-27 19:30:58.549 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:30:58 compute-0 sshd-session[250789]: Connection closed by invalid user user 156.227.233.86 port 56390 [preauth]
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:30:59 compute-0 nova_compute[185480]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:30:59 compute-0 nova_compute[185480]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db     raise result
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:30:59 compute-0 nova_compute[185480]: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db 
Jan 27 19:30:59 compute-0 sshd-session[250791]: Invalid user user from 156.227.233.86 port 43576
Jan 27 19:30:59 compute-0 podman[250793]: 2026-01-27 19:30:59.678730374 +0000 UTC m=+0.112696368 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 19:30:59 compute-0 sshd-session[250791]: Connection closed by invalid user user 156.227.233.86 port 43576 [preauth]
Jan 27 19:30:59 compute-0 podman[201378]: time="2026-01-27T19:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:30:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:30:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 27 19:30:59 compute-0 rsyslogd[235877]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:30:59 compute-0 rsyslogd[235877]: message too long (9052) with configured size 8096, begin of message is: 2026-01-27 19:30:59.510 185484 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 19:31:00 compute-0 sshd-session[250813]: Invalid user user from 156.227.233.86 port 60314
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Error during ComputeManager._run_pending_deletes: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:31:00 compute-0 nova_compute[185480]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:31:00 compute-0 nova_compute[185480]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1357, in get_by_filters\n    db_inst_list = cls._get_by_filters_impl(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1347, in _get_by_filters_impl\n    db_inst_list = db.instance_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1583, in instance_get_all_by_filters\n    return instance_get_all_by_filters_sort(context, filters, limit=limit,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1842, in instance_get_all_by_filters_sort\n    instances = query_prefix.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", 
line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task     task(self, context)
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11152, in _run_pending_deletes
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task     instances = objects.InstanceList.get_by_filters(
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task     raise result
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1357, in get_by_filters\n    db_inst_list = cls._get_by_filters_impl(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1347, in _get_by_filters_impl\n    db_inst_list = db.instance_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1583, in instance_get_all_by_filters\n    return instance_get_all_by_filters_sort(context, filters, limit=limit,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1842, in instance_get_all_by_filters_sort\n    instances = query_prefix.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in 
_checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:31:00 compute-0 nova_compute[185480]: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task 
Jan 27 19:31:00 compute-0 sshd-session[250813]: Connection closed by invalid user user 156.227.233.86 port 60314 [preauth]
Jan 27 19:31:00 compute-0 rsyslogd[235877]: message too long (9083) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:31:00 compute-0 rsyslogd[235877]: message too long (9147) with configured size 8096, begin of message is: 2026-01-27 19:31:00.597 185484 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:31:01 compute-0 openstack_network_exporter[204477]: ERROR   19:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:31:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:31:01 compute-0 openstack_network_exporter[204477]: ERROR   19:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:31:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:31:01 compute-0 sshd-session[250815]: Invalid user user from 156.227.233.86 port 45730
Jan 27 19:31:01 compute-0 nova_compute[185480]: 2026-01-27 19:31:01.597 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:31:01 compute-0 sshd-session[250815]: Connection closed by invalid user user 156.227.233.86 port 45730 [preauth]
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:31:02 compute-0 sshd-session[250817]: Invalid user user from 156.227.233.86 port 56558
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Error during ComputeManager._cleanup_expired_console_auth_tokens: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:31:02 compute-0 nova_compute[185480]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:31:02 compute-0 nova_compute[185480]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/console_auth_token.py", line 182, in clean_expired_console_auths\n    db.console_auth_token_destroy_expired(context)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 4886, in console_auth_token_destroy_expired\n    context.session.query(models.ConsoleAuthToken).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3222, in delete\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task     task(self, context)
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11282, in _cleanup_expired_console_auth_tokens
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task     objects.ConsoleAuthToken.clean_expired_console_auths(context)
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task     raise result
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/console_auth_token.py", line 182, in clean_expired_console_auths\n    db.console_auth_token_destroy_expired(context)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 4886, in console_auth_token_destroy_expired\n    context.session.query(models.ConsoleAuthToken).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3222, in delete\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:31:02 compute-0 nova_compute[185480]: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task 
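[editor's note] The traceback above shows the failure path: nova-compute forwards the DB operation to the conductor over RPC (indirection_api / conductor rpcapi), and the conductor's pymysql connection to openstack-cell1.openstack.svc is refused (errno 111), which comes back to the compute node as oslo_messaging RemoteError: DBConnectionError. A minimal connectivity probe is sketched below; it assumes the standard MySQL/MariaDB port 3306 and that it is run from a host or pod that is expected to reach the cell1 database service. Only the hostname is taken from the log; everything else is an assumption.

# probe_cell1_db.py -- minimal sketch, not part of the deployment
import socket

HOST = "openstack-cell1.openstack.svc"   # hostname from the DBConnectionError above
PORT = 3306                              # assumed standard MySQL/MariaDB port

try:
    # A refused TCP connection here reproduces the ECONNREFUSED (errno 111)
    # seen in the pymysql traceback; a timeout instead suggests DNS or
    # network-path problems rather than a stopped database service.
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"TCP connection to {HOST}:{PORT} succeeded")
except ConnectionRefusedError as exc:
    print(f"Connection refused ({exc}): endpoint resolves but nothing is listening")
except OSError as exc:
    print(f"Could not reach {HOST}:{PORT}: {exc}")

A refused connection, as here, typically points at the Galera/MariaDB service for cell1 being down or not yet started, rather than at the compute node itself.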
Jan 27 19:31:02 compute-0 ovn_controller[97647]: 2026-01-27T19:31:02Z|00116|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Jan 27 19:31:02 compute-0 podman[250819]: 2026-01-27 19:31:02.75743736 +0000 UTC m=+0.100313201 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:31:02 compute-0 podman[250820]: 2026-01-27 19:31:02.800815583 +0000 UTC m=+0.136703411 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 19:31:02 compute-0 podman[250821]: 2026-01-27 19:31:02.801340177 +0000 UTC m=+0.132433437 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 19:31:02 compute-0 rsyslogd[235877]: message too long (8183) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:31:02 compute-0 rsyslogd[235877]: message too long (8247) with configured size 8096, begin of message is: 2026-01-27 19:31:02.738 185484 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
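[editor's note] The two rsyslogd warnings above mean the multi-kilobyte oslo tracebacks (8183 and 8247 bytes) exceed rsyslog's configured 8096-byte message limit and are truncated in syslog (the full text remains in the journal). If keeping the complete tracebacks in syslog matters, the limit can be raised; the snippet below is a hedged sketch, assuming a drop-in file under /etc/rsyslog.d/ is acceptable on this node and that rsyslogd is restarted afterwards. The file name is hypothetical.

# /etc/rsyslog.d/00-maxsize.conf (hypothetical drop-in)
# Raise the per-message size limit to 64 KiB so long oslo tracebacks are not cut off.
# This setting should appear early in the configuration, before inputs are defined.
global(maxMessageSize="65536")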
Jan 27 19:31:02 compute-0 sshd-session[250817]: Connection closed by invalid user user 156.227.233.86 port 56558 [preauth]
Jan 27 19:31:03 compute-0 nova_compute[185480]: 2026-01-27 19:31:03.370 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:03 compute-0 nova_compute[185480]: 2026-01-27 19:31:03.552 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:03 compute-0 sshd-session[250884]: Invalid user user from 156.227.233.86 port 40020
Jan 27 19:31:03 compute-0 sshd-session[250884]: Connection closed by invalid user user 156.227.233.86 port 40020 [preauth]
Jan 27 19:31:04 compute-0 sshd-session[250886]: Invalid user user from 156.227.233.86 port 51364
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.734 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.735 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Error during ComputeManager.update_available_resource: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:31:04 compute-0 nova_compute[185480]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:31:04 compute-0 nova_compute[185480]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task     task(self, context)
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10584, in update_available_resource
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task     compute_nodes_in_db = self._get_compute_nodes_in_db(context,
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10631, in _get_compute_nodes_in_db
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task     return objects.ComputeNodeList.get_all_by_host(context, self.host,
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task     raise result
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:31:04 compute-0 nova_compute[185480]: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task 
Jan 27 19:31:04 compute-0 sshd-session[250886]: Connection closed by invalid user user 156.227.233.86 port 51364 [preauth]
Jan 27 19:31:04 compute-0 rsyslogd[235877]: message too long (8132) with configured size 8096, begin of message is: 2026-01-27 19:31:04.782 185484 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:31:05 compute-0 sshd-session[250889]: Invalid user user from 156.227.233.86 port 36200
Jan 27 19:31:05 compute-0 sshd-session[250889]: Connection closed by invalid user user 156.227.233.86 port 36200 [preauth]
Jan 27 19:31:06 compute-0 nova_compute[185480]: 2026-01-27 19:31:06.563 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:31:06 compute-0 nova_compute[185480]: 2026-01-27 19:31:06.564 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:31:06 compute-0 sshd-session[250891]: Invalid user user from 156.227.233.86 port 50608
Jan 27 19:31:06 compute-0 sshd-session[250891]: Connection closed by invalid user user 156.227.233.86 port 50608 [preauth]
Jan 27 19:31:07 compute-0 sshd-session[250893]: Invalid user user from 156.227.233.86 port 35414
Jan 27 19:31:07 compute-0 sshd-session[250893]: Connection closed by invalid user user 156.227.233.86 port 35414 [preauth]
Jan 27 19:31:08 compute-0 nova_compute[185480]: 2026-01-27 19:31:08.373 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:08 compute-0 nova_compute[185480]: 2026-01-27 19:31:08.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:31:08 compute-0 nova_compute[185480]: 2026-01-27 19:31:08.555 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:08 compute-0 sshd-session[250895]: Invalid user user from 156.227.233.86 port 48914
Jan 27 19:31:08 compute-0 sshd-session[250895]: Connection closed by invalid user user 156.227.233.86 port 48914 [preauth]
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:31:09 compute-0 nova_compute[185480]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:31:09 compute-0 nova_compute[185480]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db     raise result
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db 
Jan 27 19:31:09 compute-0 nova_compute[185480]: 2026-01-27 19:31:09.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:31:09 compute-0 rsyslogd[235877]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:31:09 compute-0 rsyslogd[235877]: message too long (9052) with configured size 8096, begin of message is: 2026-01-27 19:31:09.383 185484 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:31:09 compute-0 sshd-session[250897]: Invalid user user from 156.227.233.86 port 33236
Jan 27 19:31:09 compute-0 sshd-session[250897]: Connection closed by invalid user user 156.227.233.86 port 33236 [preauth]
Jan 27 19:31:10 compute-0 sshd-session[250899]: Invalid user user from 156.227.233.86 port 47844
Jan 27 19:31:11 compute-0 sshd-session[250899]: Connection closed by invalid user user 156.227.233.86 port 47844 [preauth]
Jan 27 19:31:11 compute-0 sshd-session[250901]: Invalid user user from 156.227.233.86 port 34126
Jan 27 19:31:12 compute-0 sshd-session[250901]: Connection closed by invalid user user 156.227.233.86 port 34126 [preauth]
Jan 27 19:31:12 compute-0 nova_compute[185480]: 2026-01-27 19:31:12.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:31:12 compute-0 sshd-session[250903]: Invalid user user from 156.227.233.86 port 47822
Jan 27 19:31:13 compute-0 sshd-session[250903]: Connection closed by invalid user user 156.227.233.86 port 47822 [preauth]
Jan 27 19:31:13 compute-0 nova_compute[185480]: 2026-01-27 19:31:13.377 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:13 compute-0 nova_compute[185480]: 2026-01-27 19:31:13.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:31:13 compute-0 nova_compute[185480]: 2026-01-27 19:31:13.558 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:13 compute-0 sshd-session[250905]: Invalid user user from 156.227.233.86 port 35046
Jan 27 19:31:14 compute-0 sshd-session[250905]: Connection closed by invalid user user 156.227.233.86 port 35046 [preauth]
Jan 27 19:31:14 compute-0 sshd-session[250907]: Invalid user user from 156.227.233.86 port 49972
Jan 27 19:31:15 compute-0 podman[250910]: 2026-01-27 19:31:15.094257803 +0000 UTC m=+0.133945063 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 27 19:31:15 compute-0 sshd-session[250907]: Connection closed by invalid user user 156.227.233.86 port 49972 [preauth]
Jan 27 19:31:15 compute-0 podman[250909]: 2026-01-27 19:31:15.111853799 +0000 UTC m=+0.154346088 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:31:15 compute-0 podman[250911]: 2026-01-27 19:31:15.117722404 +0000 UTC m=+0.146498775 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, config_id=kepler, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, version=9.4, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-type=git, io.buildah.version=1.29.0, distribution-scope=public)
Jan 27 19:31:15 compute-0 sshd-session[250971]: Invalid user user from 156.227.233.86 port 35184
Jan 27 19:31:16 compute-0 sshd-session[250971]: Connection closed by invalid user user 156.227.233.86 port 35184 [preauth]
Jan 27 19:31:16 compute-0 sshd-session[250973]: Invalid user user from 156.227.233.86 port 50902
Jan 27 19:31:17 compute-0 sshd-session[250973]: Connection closed by invalid user user 156.227.233.86 port 50902 [preauth]
Jan 27 19:31:18 compute-0 sshd-session[250975]: Invalid user user from 156.227.233.86 port 37966
Jan 27 19:31:18 compute-0 sshd-session[250975]: Connection closed by invalid user user 156.227.233.86 port 37966 [preauth]
Jan 27 19:31:18 compute-0 nova_compute[185480]: 2026-01-27 19:31:18.380 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:18 compute-0 nova_compute[185480]: 2026-01-27 19:31:18.561 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:19 compute-0 sshd-session[250977]: Invalid user user from 156.227.233.86 port 50512
Jan 27 19:31:19 compute-0 sshd-session[250977]: Connection closed by invalid user user 156.227.233.86 port 50512 [preauth]
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:31:19 compute-0 nova_compute[185480]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:31:19 compute-0 nova_compute[185480]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db     raise result
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db 
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.515 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Error during ComputeManager._cleanup_incomplete_migrations: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:31:19 compute-0 nova_compute[185480]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:31:19 compute-0 nova_compute[185480]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/migration.py", line 266, in get_by_filters\n    db_migrations = db.migration_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 3457, in migration_get_all_by_filters\n    return query.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n  
  compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task     task(self, context)
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11186, in _cleanup_incomplete_migrations
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task     migrations = objects.MigrationList.get_by_filters(context,
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task     raise result
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/migration.py", line 266, in get_by_filters\n    db_migrations = db.migration_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 3457, in migration_get_all_by_filters\n    return query.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on 
connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 19:31:19 compute-0 nova_compute[185480]: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task 
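The periodic-task record above ends in oslo_db.exception.DBConnectionError wrapping pymysql OperationalError 2003, i.e. the TCP connection to openstack-cell1.openstack.svc was refused. A minimal sketch, with hypothetical credentials and database name, of how that same error surfaces through PyMySQL when nothing is listening on the endpoint:

# Minimal sketch (hypothetical credentials/database): reproduce the OperationalError (2003)
# recorded above when 'openstack-cell1.openstack.svc' refuses TCP connections.
import pymysql

try:
    conn = pymysql.connect(
        host="openstack-cell1.openstack.svc",  # endpoint from the traceback above
        port=3306,
        user="nova",            # hypothetical
        password="secret",      # hypothetical
        database="nova_cell1",  # hypothetical
        connect_timeout=5,
    )
    conn.close()
    print("cell1 database reachable")
except pymysql.err.OperationalError as exc:
    # With no listener, errno 2003 wraps [Errno 111] ECONNREFUSED, matching the log record.
    print("cell1 database unreachable:", exc)

Run against a host with no MySQL listener, the sketch prints the same (2003, ECONNREFUSED) tuple that appears in the traceback above.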
Jan 27 19:31:19 compute-0 rsyslogd[235877]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:31:19 compute-0 rsyslogd[235877]: message too long (9052) with configured size 8096, begin of message is: 2026-01-27 19:31:19.443 185484 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:31:19 compute-0 rsyslogd[235877]: message too long (8248) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:31:19 compute-0 rsyslogd[235877]: message too long (8312) with configured size 8096, begin of message is: 2026-01-27 19:31:19.569 185484 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 19:31:20 compute-0 sshd-session[250979]: Invalid user user from 156.227.233.86 port 33958
Jan 27 19:31:20 compute-0 sshd-session[250979]: Connection closed by invalid user user 156.227.233.86 port 33958 [preauth]
Jan 27 19:31:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:20.543 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:31:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:20.544 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:31:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:20.544 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:31:21 compute-0 sshd-session[250982]: Invalid user user from 156.227.233.86 port 49218
Jan 27 19:31:21 compute-0 sshd-session[250982]: Connection closed by invalid user user 156.227.233.86 port 49218 [preauth]
Jan 27 19:31:22 compute-0 sshd-session[250984]: Invalid user user from 156.227.233.86 port 36818
Jan 27 19:31:22 compute-0 sshd-session[250984]: Connection closed by invalid user user 156.227.233.86 port 36818 [preauth]
Jan 27 19:31:22 compute-0 nova_compute[185480]: 2026-01-27 19:31:22.704 185484 DEBUG nova.network.neutron [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Successfully created port: 2976aaab-c73e-4d12-88b9-4a36da5c35e0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 19:31:23 compute-0 sshd-session[250986]: Invalid user user from 156.227.233.86 port 50396
Jan 27 19:31:23 compute-0 sshd-session[250986]: Connection closed by invalid user user 156.227.233.86 port 50396 [preauth]
Jan 27 19:31:23 compute-0 nova_compute[185480]: 2026-01-27 19:31:23.384 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:23 compute-0 nova_compute[185480]: 2026-01-27 19:31:23.564 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:24 compute-0 sshd-session[250988]: Invalid user user from 156.227.233.86 port 36674
Jan 27 19:31:24 compute-0 podman[250990]: 2026-01-27 19:31:24.355430901 +0000 UTC m=+0.120244816 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9)
Jan 27 19:31:24 compute-0 sshd-session[250988]: Connection closed by invalid user user 156.227.233.86 port 36674 [preauth]
Jan 27 19:31:24 compute-0 nova_compute[185480]: 2026-01-27 19:31:24.550 185484 DEBUG nova.network.neutron [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Successfully updated port: 2976aaab-c73e-4d12-88b9-4a36da5c35e0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 19:31:24 compute-0 nova_compute[185480]: 2026-01-27 19:31:24.733 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Acquiring lock "refresh_cache-14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:31:24 compute-0 nova_compute[185480]: 2026-01-27 19:31:24.735 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Acquired lock "refresh_cache-14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:31:24 compute-0 nova_compute[185480]: 2026-01-27 19:31:24.736 185484 DEBUG nova.network.neutron [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 19:31:24 compute-0 nova_compute[185480]: 2026-01-27 19:31:24.830 185484 DEBUG nova.compute.manager [req-8f831fda-5678-4eec-b8b7-2c817de4a9b9 req-8310100e-d88b-4bb5-805f-7270c1b63374 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Received event network-changed-2976aaab-c73e-4d12-88b9-4a36da5c35e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:31:24 compute-0 nova_compute[185480]: 2026-01-27 19:31:24.832 185484 DEBUG nova.compute.manager [req-8f831fda-5678-4eec-b8b7-2c817de4a9b9 req-8310100e-d88b-4bb5-805f-7270c1b63374 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Refreshing instance network info cache due to event network-changed-2976aaab-c73e-4d12-88b9-4a36da5c35e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 19:31:24 compute-0 nova_compute[185480]: 2026-01-27 19:31:24.833 185484 DEBUG oslo_concurrency.lockutils [req-8f831fda-5678-4eec-b8b7-2c817de4a9b9 req-8310100e-d88b-4bb5-805f-7270c1b63374 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "refresh_cache-14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:31:24 compute-0 nova_compute[185480]: 2026-01-27 19:31:24.962 185484 DEBUG nova.network.neutron [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 19:31:25 compute-0 sshd-session[251010]: Invalid user user from 156.227.233.86 port 48758
Jan 27 19:31:25 compute-0 sshd-session[251010]: Connection closed by invalid user user 156.227.233.86 port 48758 [preauth]
Jan 27 19:31:26 compute-0 sshd-session[251012]: Invalid user user from 156.227.233.86 port 34104
Jan 27 19:31:26 compute-0 sshd-session[251012]: Connection closed by invalid user user 156.227.233.86 port 34104 [preauth]
Jan 27 19:31:27 compute-0 sshd-session[251014]: Invalid user user from 156.227.233.86 port 47144
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.501 185484 DEBUG nova.network.neutron [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Updating instance_info_cache with network_info: [{"id": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "address": "fa:16:3e:0c:85:f5", "network": {"id": "4edcfe4e-277a-432a-a139-0a06cca1f6d2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-587329447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b8690906d754ad4b5878d33231c97f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2976aaab-c7", "ovs_interfaceid": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:31:27 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:27.524 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3e:76:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '5a:db:95:e4:24:00'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:31:27 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:27.525 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 19:31:27 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:27.526 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d3b19c13-a2f4-422f-8fa1-01ce64dc0c58, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.527 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:27 compute-0 sshd-session[251014]: Connection closed by invalid user user 156.227.233.86 port 47144 [preauth]
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.560 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Releasing lock "refresh_cache-14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.560 185484 DEBUG nova.compute.manager [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Instance network_info: |[{"id": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "address": "fa:16:3e:0c:85:f5", "network": {"id": "4edcfe4e-277a-432a-a139-0a06cca1f6d2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-587329447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b8690906d754ad4b5878d33231c97f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2976aaab-c7", "ovs_interfaceid": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.561 185484 DEBUG oslo_concurrency.lockutils [req-8f831fda-5678-4eec-b8b7-2c817de4a9b9 req-8310100e-d88b-4bb5-805f-7270c1b63374 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquired lock "refresh_cache-14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.561 185484 DEBUG nova.network.neutron [req-8f831fda-5678-4eec-b8b7-2c817de4a9b9 req-8310100e-d88b-4bb5-805f-7270c1b63374 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Refreshing network info cache for port 2976aaab-c73e-4d12-88b9-4a36da5c35e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.564 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Start _get_guest_xml network_info=[{"id": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "address": "fa:16:3e:0c:85:f5", "network": {"id": "4edcfe4e-277a-432a-a139-0a06cca1f6d2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-587329447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b8690906d754ad4b5878d33231c97f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2976aaab-c7", "ovs_interfaceid": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T19:26:43Z,direct_url=<?>,disk_format='qcow2',id=729797c6-2677-44bd-a4a8-949d1f57b0a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T19:26:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'image_id': '729797c6-2677-44bd-a4a8-949d1f57b0a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.575 185484 WARNING nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.596 185484 DEBUG nova.virt.libvirt.host [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.598 185484 DEBUG nova.virt.libvirt.host [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.604 185484 DEBUG nova.virt.libvirt.host [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.605 185484 DEBUG nova.virt.libvirt.host [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
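The two probes above first look for a cgroups v1 CPU controller (missing) and then find one via the cgroups v2 unified hierarchy. A minimal sketch of the v2 check, assuming the standard /sys/fs/cgroup mount point; nova's own lookup logic may differ in detail:

# Sketch of a cgroups v2 CPU-controller probe (assumes the unified hierarchy is
# mounted at /sys/fs/cgroup; nova's internal helper may check more than this).
from pathlib import Path

def has_cgroupsv2_cpu_controller(root: str = "/sys/fs/cgroup") -> bool:
    controllers = Path(root, "cgroup.controllers")
    if not controllers.exists():
        return False  # not a cgroups v2 unified hierarchy
    return "cpu" in controllers.read_text().split()

print(has_cgroupsv2_cpu_controller())

On cgroups v2, cgroup.controllers lists the controllers available for delegation, so a "cpu" entry corresponds to the "CPU controller found on host" record above.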
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.606 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.607 185484 DEBUG nova.virt.hardware [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T19:26:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='49f81b8c-e0df-4a53-87c6-69576be59651',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T19:26:43Z,direct_url=<?>,disk_format='qcow2',id=729797c6-2677-44bd-a4a8-949d1f57b0a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f04ec1493db14ca1adbb4b6abd1667b1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T19:26:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.608 185484 DEBUG nova.virt.hardware [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.608 185484 DEBUG nova.virt.hardware [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.609 185484 DEBUG nova.virt.hardware [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.609 185484 DEBUG nova.virt.hardware [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.610 185484 DEBUG nova.virt.hardware [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.610 185484 DEBUG nova.virt.hardware [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.610 185484 DEBUG nova.virt.hardware [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.611 185484 DEBUG nova.virt.hardware [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.611 185484 DEBUG nova.virt.hardware [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.612 185484 DEBUG nova.virt.hardware [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.616 185484 DEBUG nova.virt.libvirt.vif [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T19:30:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-252932369',display_name='tempest-TestServerBasicOps-server-252932369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-252932369',id=11,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC6aL+CzoQXIBYvoRzyBXGS7SatIleelJyt3kZ1RLum0jV6DdMHIM0E6iq9x5pyJnvnY8l7xrZUA0Ib8dh01So/kETndNWmYqmNWvbEQWm72VX7XuwpsuIHVjzal298RRA==',key_name='tempest-TestServerBasicOps-975359150',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b8690906d754ad4b5878d33231c97f9',ramdisk_id='',reservation_id='r-6fq8qkqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1755042286',owner_user_name='tempest-TestServerBasicOps-1755042286-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:30:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d57ebe90e53d40899a8b3f3ce873df18',uuid=14a5dad2-3e22-42a5-bd6e-7255c6b09d8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "address": "fa:16:3e:0c:85:f5", "network": {"id": "4edcfe4e-277a-432a-a139-0a06cca1f6d2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-587329447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b8690906d754ad4b5878d33231c97f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2976aaab-c7", "ovs_interfaceid": 
"2976aaab-c73e-4d12-88b9-4a36da5c35e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.617 185484 DEBUG nova.network.os_vif_util [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Converting VIF {"id": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "address": "fa:16:3e:0c:85:f5", "network": {"id": "4edcfe4e-277a-432a-a139-0a06cca1f6d2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-587329447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b8690906d754ad4b5878d33231c97f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2976aaab-c7", "ovs_interfaceid": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.618 185484 DEBUG nova.network.os_vif_util [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:85:f5,bridge_name='br-int',has_traffic_filtering=True,id=2976aaab-c73e-4d12-88b9-4a36da5c35e0,network=Network(4edcfe4e-277a-432a-a139-0a06cca1f6d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2976aaab-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.619 185484 DEBUG nova.objects.instance [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.641 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 19:31:27 compute-0 nova_compute[185480]:   <uuid>14a5dad2-3e22-42a5-bd6e-7255c6b09d8f</uuid>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   <name>instance-0000000b</name>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   <memory>131072</memory>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   <vcpu>1</vcpu>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   <metadata>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <nova:name>tempest-TestServerBasicOps-server-252932369</nova:name>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <nova:creationTime>2026-01-27 19:31:27</nova:creationTime>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <nova:flavor name="m1.nano">
Jan 27 19:31:27 compute-0 nova_compute[185480]:         <nova:memory>128</nova:memory>
Jan 27 19:31:27 compute-0 nova_compute[185480]:         <nova:disk>1</nova:disk>
Jan 27 19:31:27 compute-0 nova_compute[185480]:         <nova:swap>0</nova:swap>
Jan 27 19:31:27 compute-0 nova_compute[185480]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 19:31:27 compute-0 nova_compute[185480]:         <nova:vcpus>1</nova:vcpus>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       </nova:flavor>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <nova:owner>
Jan 27 19:31:27 compute-0 nova_compute[185480]:         <nova:user uuid="d57ebe90e53d40899a8b3f3ce873df18">tempest-TestServerBasicOps-1755042286-project-member</nova:user>
Jan 27 19:31:27 compute-0 nova_compute[185480]:         <nova:project uuid="2b8690906d754ad4b5878d33231c97f9">tempest-TestServerBasicOps-1755042286</nova:project>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       </nova:owner>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <nova:root type="image" uuid="729797c6-2677-44bd-a4a8-949d1f57b0a2"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <nova:ports>
Jan 27 19:31:27 compute-0 nova_compute[185480]:         <nova:port uuid="2976aaab-c73e-4d12-88b9-4a36da5c35e0">
Jan 27 19:31:27 compute-0 nova_compute[185480]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:         </nova:port>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       </nova:ports>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     </nova:instance>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   </metadata>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   <sysinfo type="smbios">
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <system>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <entry name="manufacturer">RDO</entry>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <entry name="product">OpenStack Compute</entry>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <entry name="serial">14a5dad2-3e22-42a5-bd6e-7255c6b09d8f</entry>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <entry name="uuid">14a5dad2-3e22-42a5-bd6e-7255c6b09d8f</entry>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <entry name="family">Virtual Machine</entry>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     </system>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   </sysinfo>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   <os>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <boot dev="hd"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <smbios mode="sysinfo"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   </os>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   <features>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <acpi/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <apic/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <vmcoreinfo/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   </features>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   <clock offset="utc">
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <timer name="hpet" present="no"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   </clock>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   <cpu mode="host-model" match="exact">
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   </cpu>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   <devices>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <disk type="file" device="disk">
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <target dev="vda" bus="virtio"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <disk type="file" device="cdrom">
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <source file="/var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.config"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <target dev="sda" bus="sata"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     </disk>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <interface type="ethernet">
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <mac address="fa:16:3e:0c:85:f5"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <mtu size="1442"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <target dev="tap2976aaab-c7"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     </interface>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <serial type="pty">
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <log file="/var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/console.log" append="off"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     </serial>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <video>
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <model type="virtio"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     </video>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <input type="tablet" bus="usb"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <rng model="virtio">
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <backend model="random">/dev/urandom</backend>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     </rng>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <controller type="usb" index="0"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     <memballoon model="virtio">
Jan 27 19:31:27 compute-0 nova_compute[185480]:       <stats period="10"/>
Jan 27 19:31:27 compute-0 nova_compute[185480]:     </memballoon>
Jan 27 19:31:27 compute-0 nova_compute[185480]:   </devices>
Jan 27 19:31:27 compute-0 nova_compute[185480]: </domain>
Jan 27 19:31:27 compute-0 nova_compute[185480]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
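The XML above is the guest definition the libvirt driver hands to libvirtd for instance-0000000b. A minimal sketch, using the libvirt-python bindings with a placeholder file path and the default system URI, of loading such a dump and defining it by hand for out-of-band inspection (nova performs the equivalent step itself during spawn):

# Sketch only: define a domain from an XML dump like the one above.
# Requires the libvirt-python bindings and a running libvirtd; the path is a placeholder.
import libvirt

with open("/tmp/instance-0000000b.xml") as f:   # hypothetical copy of the XML above
    xml = f.read()

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(xml)   # persists the definition without starting the guest
print(dom.name(), dom.UUIDString())
conn.close()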
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.643 185484 DEBUG nova.compute.manager [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Preparing to wait for external event network-vif-plugged-2976aaab-c73e-4d12-88b9-4a36da5c35e0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.644 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Acquiring lock "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.645 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lock "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.646 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lock "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.647 185484 DEBUG nova.virt.libvirt.vif [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T19:30:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-252932369',display_name='tempest-TestServerBasicOps-server-252932369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-252932369',id=11,image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC6aL+CzoQXIBYvoRzyBXGS7SatIleelJyt3kZ1RLum0jV6DdMHIM0E6iq9x5pyJnvnY8l7xrZUA0Ib8dh01So/kETndNWmYqmNWvbEQWm72VX7XuwpsuIHVjzal298RRA==',key_name='tempest-TestServerBasicOps-975359150',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b8690906d754ad4b5878d33231c97f9',ramdisk_id='',reservation_id='r-6fq8qkqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='729797c6-2677-44bd-a4a8-949d1f57b0a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1755042286',owner_user_name='tempest-TestServerBasicOps-1755042286-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T19:30:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d57ebe90e53d40899a8b3f3ce873df18',uuid=14a5dad2-3e22-42a5-bd6e-7255c6b09d8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "address": "fa:16:3e:0c:85:f5", "network": {"id": "4edcfe4e-277a-432a-a139-0a06cca1f6d2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-587329447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b8690906d754ad4b5878d33231c97f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2976aaab-c7", "ovs_interfaceid": 
"2976aaab-c73e-4d12-88b9-4a36da5c35e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.648 185484 DEBUG nova.network.os_vif_util [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Converting VIF {"id": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "address": "fa:16:3e:0c:85:f5", "network": {"id": "4edcfe4e-277a-432a-a139-0a06cca1f6d2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-587329447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b8690906d754ad4b5878d33231c97f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2976aaab-c7", "ovs_interfaceid": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.650 185484 DEBUG nova.network.os_vif_util [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:85:f5,bridge_name='br-int',has_traffic_filtering=True,id=2976aaab-c73e-4d12-88b9-4a36da5c35e0,network=Network(4edcfe4e-277a-432a-a139-0a06cca1f6d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2976aaab-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.651 185484 DEBUG os_vif [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:85:f5,bridge_name='br-int',has_traffic_filtering=True,id=2976aaab-c73e-4d12-88b9-4a36da5c35e0,network=Network(4edcfe4e-277a-432a-a139-0a06cca1f6d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2976aaab-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.652 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.653 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.654 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.661 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.661 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2976aaab-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.663 185484 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2976aaab-c7, col_values=(('external_ids', {'iface-id': '2976aaab-c73e-4d12-88b9-4a36da5c35e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:85:f5', 'vm-uuid': '14a5dad2-3e22-42a5-bd6e-7255c6b09d8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.668 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:27 compute-0 NetworkManager[56191]: <info>  [1769542287.6709] manager: (tap2976aaab-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.671 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.686 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.688 185484 INFO os_vif [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:85:f5,bridge_name='br-int',has_traffic_filtering=True,id=2976aaab-c73e-4d12-88b9-4a36da5c35e0,network=Network(4edcfe4e-277a-432a-a139-0a06cca1f6d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2976aaab-c7')
Jan 27 19:31:27 compute-0 ovn_controller[97647]: 2026-01-27T19:31:27Z|00117|binding|INFO|Releasing lport 48c695f0-2fed-4bb1-9b53-847c4dc25e7f from this chassis (sb_readonly=0)
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.840 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.849 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.850 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.850 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] No VIF found with MAC fa:16:3e:0c:85:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 19:31:27 compute-0 nova_compute[185480]: 2026-01-27 19:31:27.851 185484 INFO nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Using config drive
Jan 27 19:31:28 compute-0 nova_compute[185480]: 2026-01-27 19:31:28.386 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:28 compute-0 sshd-session[251018]: Invalid user user from 156.227.233.86 port 33526
Jan 27 19:31:28 compute-0 nova_compute[185480]: 2026-01-27 19:31:28.549 185484 INFO nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Creating config drive at /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.config
Jan 27 19:31:28 compute-0 nova_compute[185480]: 2026-01-27 19:31:28.562 185484 DEBUG oslo_concurrency.processutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zkwl45k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:31:28 compute-0 sshd-session[251018]: Connection closed by invalid user user 156.227.233.86 port 33526 [preauth]
Jan 27 19:31:28 compute-0 nova_compute[185480]: 2026-01-27 19:31:28.716 185484 DEBUG oslo_concurrency.processutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zkwl45k" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:31:28 compute-0 kernel: tap2976aaab-c7: entered promiscuous mode
Jan 27 19:31:28 compute-0 NetworkManager[56191]: <info>  [1769542288.8366] manager: (tap2976aaab-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Jan 27 19:31:28 compute-0 nova_compute[185480]: 2026-01-27 19:31:28.840 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:28 compute-0 ovn_controller[97647]: 2026-01-27T19:31:28Z|00118|binding|INFO|Claiming lport 2976aaab-c73e-4d12-88b9-4a36da5c35e0 for this chassis.
Jan 27 19:31:28 compute-0 ovn_controller[97647]: 2026-01-27T19:31:28Z|00119|binding|INFO|2976aaab-c73e-4d12-88b9-4a36da5c35e0: Claiming fa:16:3e:0c:85:f5 10.100.0.10
Jan 27 19:31:28 compute-0 ovn_controller[97647]: 2026-01-27T19:31:28Z|00120|binding|INFO|Setting lport 2976aaab-c73e-4d12-88b9-4a36da5c35e0 ovn-installed in OVS
Jan 27 19:31:28 compute-0 nova_compute[185480]: 2026-01-27 19:31:28.863 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:28 compute-0 nova_compute[185480]: 2026-01-27 19:31:28.867 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:28 compute-0 systemd-udevd[251039]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 19:31:28 compute-0 systemd-machined[156762]: New machine qemu-11-instance-0000000b.
Jan 27 19:31:28 compute-0 NetworkManager[56191]: <info>  [1769542288.9209] device (tap2976aaab-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 19:31:28 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Jan 27 19:31:28 compute-0 NetworkManager[56191]: <info>  [1769542288.9216] device (tap2976aaab-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.009 106898 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:85:f5 10.100.0.10'], port_security=['fa:16:3e:0c:85:f5 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '14a5dad2-3e22-42a5-bd6e-7255c6b09d8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4edcfe4e-277a-432a-a139-0a06cca1f6d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b8690906d754ad4b5878d33231c97f9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '68f74f40-3859-405c-bbe6-353d40483e1f e0245feb-da33-4f9a-ad01-910ae87146ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec17f8db-5365-442d-8645-04ff2bf79388, chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7157a97df0>], logical_port=2976aaab-c73e-4d12-88b9-4a36da5c35e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.010 106898 INFO neutron.agent.ovn.metadata.agent [-] Port 2976aaab-c73e-4d12-88b9-4a36da5c35e0 in datapath 4edcfe4e-277a-432a-a139-0a06cca1f6d2 bound to our chassis
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.012 106898 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4edcfe4e-277a-432a-a139-0a06cca1f6d2
Jan 27 19:31:29 compute-0 ovn_controller[97647]: 2026-01-27T19:31:29Z|00121|binding|INFO|Setting lport 2976aaab-c73e-4d12-88b9-4a36da5c35e0 up in Southbound
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.033 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[a15753d1-4b75-4f41-9119-e727b530d7e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.035 106898 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4edcfe4e-21 in ovnmeta-4edcfe4e-277a-432a-a139-0a06cca1f6d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.039 238834 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4edcfe4e-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.040 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[93da68c0-bf08-4783-b0c3-f435a244cfe7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.042 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[80e62d20-73bc-4bde-8291-8fb4a72f36d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.067 107353 DEBUG oslo.privsep.daemon [-] privsep: reply[10e10670-c273-4ab3-b24f-cfe9399c11b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.091 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba195f6-795f-4ee0-bc55-4b5014b46a90]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.135 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[dce8a58b-6d24-4510-b637-d74cf0d7935a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 NetworkManager[56191]: <info>  [1769542289.1475] manager: (tap4edcfe4e-20): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.146 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[f8cdb04b-f1f1-43e1-8dfa-9878d00251de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.205 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc33f49-5b34-42d5-a194-f001be1ccf0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.210 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[094cf017-42fe-4d94-8363-dc6a66d55bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 NetworkManager[56191]: <info>  [1769542289.2540] device (tap4edcfe4e-20): carrier: link connected
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.267 238851 DEBUG oslo.privsep.daemon [-] privsep: reply[754f78a6-a661-458a-9e11-6ebebf5b4fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.297 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[597f1289-b923-49f2-a453-169ae3cf384c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4edcfe4e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:f4:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540582, 'reachable_time': 27940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251073, 'error': None, 'target': 'ovnmeta-4edcfe4e-277a-432a-a139-0a06cca1f6d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.325 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[73704888-dbe7-4673-8980-888fb657159b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:f4b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540582, 'tstamp': 540582}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251074, 'error': None, 'target': 'ovnmeta-4edcfe4e-277a-432a-a139-0a06cca1f6d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.359 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[73633acc-b4a4-4151-a219-141ffaf4b948]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4edcfe4e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:f4:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540582, 'reachable_time': 27940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251075, 'error': None, 'target': 'ovnmeta-4edcfe4e-277a-432a-a139-0a06cca1f6d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 sshd-session[251023]: Invalid user user from 156.227.233.86 port 47390
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.420 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[2d997888-9c1a-41dd-b6d4-5fb00fe1d46f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 nova_compute[185480]: 2026-01-27 19:31:29.503 185484 INFO nova.servicegroup.drivers.db [-] Recovered from being unable to report status.
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.546 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[425bc959-60c2-4659-b39d-448ec3230531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.549 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4edcfe4e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.552 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.553 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4edcfe4e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:31:29 compute-0 nova_compute[185480]: 2026-01-27 19:31:29.556 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:29 compute-0 NetworkManager[56191]: <info>  [1769542289.5573] manager: (tap4edcfe4e-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Jan 27 19:31:29 compute-0 kernel: tap4edcfe4e-20: entered promiscuous mode
Jan 27 19:31:29 compute-0 sshd-session[251023]: Connection closed by invalid user user 156.227.233.86 port 47390 [preauth]
Jan 27 19:31:29 compute-0 nova_compute[185480]: 2026-01-27 19:31:29.573 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.576 106898 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4edcfe4e-20, col_values=(('external_ids', {'iface-id': '454b8599-cdf3-4c6e-8721-8fbc90ddaa79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 19:31:29 compute-0 nova_compute[185480]: 2026-01-27 19:31:29.577 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:29 compute-0 ovn_controller[97647]: 2026-01-27T19:31:29Z|00122|binding|INFO|Releasing lport 454b8599-cdf3-4c6e-8721-8fbc90ddaa79 from this chassis (sb_readonly=0)
Jan 27 19:31:29 compute-0 nova_compute[185480]: 2026-01-27 19:31:29.607 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:29 compute-0 nova_compute[185480]: 2026-01-27 19:31:29.609 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.611 106898 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4edcfe4e-277a-432a-a139-0a06cca1f6d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4edcfe4e-277a-432a-a139-0a06cca1f6d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.613 238834 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca880e9-a4da-4f1f-ac07-b98d7b6f25f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.614 106898 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: global
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     log         /dev/log local0 debug
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     log-tag     haproxy-metadata-proxy-4edcfe4e-277a-432a-a139-0a06cca1f6d2
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     user        root
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     group       root
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     maxconn     1024
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     pidfile     /var/lib/neutron/external/pids/4edcfe4e-277a-432a-a139-0a06cca1f6d2.pid.haproxy
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     daemon
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: defaults
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     log global
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     mode http
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     option httplog
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     option dontlognull
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     option http-server-close
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     option forwardfor
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     retries                 3
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     timeout http-request    30s
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     timeout connect         30s
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     timeout client          32s
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     timeout server          32s
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     timeout http-keep-alive 30s
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: listen listener
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     bind 169.254.169.254:80
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:     http-request add-header X-OVN-Network-ID 4edcfe4e-277a-432a-a139-0a06cca1f6d2
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 19:31:29 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:31:29.614 106898 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4edcfe4e-277a-432a-a139-0a06cca1f6d2', 'env', 'PROCESS_TAG=haproxy-4edcfe4e-277a-432a-a139-0a06cca1f6d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4edcfe4e-277a-432a-a139-0a06cca1f6d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 19:31:29 compute-0 podman[201378]: time="2026-01-27T19:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:31:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 19:31:29 compute-0 nova_compute[185480]: 2026-01-27 19:31:29.760 185484 DEBUG nova.compute.manager [req-847060a7-37ed-401f-9725-aa368f02ef66 req-abbd9254-86d3-49a1-b734-6adccb745ed8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Received event network-vif-plugged-2976aaab-c73e-4d12-88b9-4a36da5c35e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:31:29 compute-0 nova_compute[185480]: 2026-01-27 19:31:29.762 185484 DEBUG oslo_concurrency.lockutils [req-847060a7-37ed-401f-9725-aa368f02ef66 req-abbd9254-86d3-49a1-b734-6adccb745ed8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:31:29 compute-0 nova_compute[185480]: 2026-01-27 19:31:29.763 185484 DEBUG oslo_concurrency.lockutils [req-847060a7-37ed-401f-9725-aa368f02ef66 req-abbd9254-86d3-49a1-b734-6adccb745ed8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:31:29 compute-0 nova_compute[185480]: 2026-01-27 19:31:29.763 185484 DEBUG oslo_concurrency.lockutils [req-847060a7-37ed-401f-9725-aa368f02ef66 req-abbd9254-86d3-49a1-b734-6adccb745ed8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:31:29 compute-0 nova_compute[185480]: 2026-01-27 19:31:29.763 185484 DEBUG nova.compute.manager [req-847060a7-37ed-401f-9725-aa368f02ef66 req-abbd9254-86d3-49a1-b734-6adccb745ed8 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Processing event network-vif-plugged-2976aaab-c73e-4d12-88b9-4a36da5c35e0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 19:31:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.033 185484 DEBUG nova.network.neutron [req-8f831fda-5678-4eec-b8b7-2c817de4a9b9 req-8310100e-d88b-4bb5-805f-7270c1b63374 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Updated VIF entry in instance network info cache for port 2976aaab-c73e-4d12-88b9-4a36da5c35e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.034 185484 DEBUG nova.network.neutron [req-8f831fda-5678-4eec-b8b7-2c817de4a9b9 req-8310100e-d88b-4bb5-805f-7270c1b63374 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Updating instance_info_cache with network_info: [{"id": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "address": "fa:16:3e:0c:85:f5", "network": {"id": "4edcfe4e-277a-432a-a139-0a06cca1f6d2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-587329447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b8690906d754ad4b5878d33231c97f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2976aaab-c7", "ovs_interfaceid": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.067 185484 DEBUG oslo_concurrency.lockutils [req-8f831fda-5678-4eec-b8b7-2c817de4a9b9 req-8310100e-d88b-4bb5-805f-7270c1b63374 bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Releasing lock "refresh_cache-14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:31:30 compute-0 podman[251109]: 2026-01-27 19:31:30.123223939 +0000 UTC m=+0.096604091 container create fa76ad7d53196f7a93d9e26e73c76a959553c7330c60d4e90d6fc33e91a507ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4edcfe4e-277a-432a-a139-0a06cca1f6d2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:31:30 compute-0 podman[251109]: 2026-01-27 19:31:30.072942355 +0000 UTC m=+0.046322567 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 19:31:30 compute-0 systemd[1]: Started libpod-conmon-fa76ad7d53196f7a93d9e26e73c76a959553c7330c60d4e90d6fc33e91a507ec.scope.
Jan 27 19:31:30 compute-0 systemd[1]: Started libcrun container.
Jan 27 19:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a33c34f3c36672de294a1cdd1f993fa0487948068a502b768eceffe44d97e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 19:31:30 compute-0 podman[251109]: 2026-01-27 19:31:30.242349295 +0000 UTC m=+0.215729507 container init fa76ad7d53196f7a93d9e26e73c76a959553c7330c60d4e90d6fc33e91a507ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4edcfe4e-277a-432a-a139-0a06cca1f6d2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:31:30 compute-0 podman[251109]: 2026-01-27 19:31:30.252785873 +0000 UTC m=+0.226166025 container start fa76ad7d53196f7a93d9e26e73c76a959553c7330c60d4e90d6fc33e91a507ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4edcfe4e-277a-432a-a139-0a06cca1f6d2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 19:31:30 compute-0 neutron-haproxy-ovnmeta-4edcfe4e-277a-432a-a139-0a06cca1f6d2[251130]: [NOTICE]   (251151) : New worker (251155) forked
Jan 27 19:31:30 compute-0 neutron-haproxy-ovnmeta-4edcfe4e-277a-432a-a139-0a06cca1f6d2[251130]: [NOTICE]   (251151) : Loading success.
Jan 27 19:31:30 compute-0 podman[251122]: 2026-01-27 19:31:30.303462137 +0000 UTC m=+0.136418995 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2)
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.350 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542290.3500302, 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.351 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] VM Started (Lifecycle Event)
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.354 185484 DEBUG nova.compute.manager [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.362 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.368 185484 INFO nova.virt.libvirt.driver [-] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Instance spawned successfully.
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.369 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.384 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.396 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.399 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.400 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.400 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.401 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.401 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.402 185484 DEBUG nova.virt.libvirt.driver [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 19:31:30 compute-0 sshd-session[251085]: Invalid user user from 156.227.233.86 port 60436
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.507 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.507 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542290.3501291, 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.508 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] VM Paused (Lifecycle Event)
Jan 27 19:31:30 compute-0 sshd-session[251085]: Connection closed by invalid user user 156.227.233.86 port 60436 [preauth]
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.657 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.662 185484 DEBUG nova.virt.driver [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] Emitting event <LifecycleEvent: 1769542290.3601823, 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.662 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] VM Resumed (Lifecycle Event)
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.695 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.701 185484 DEBUG nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.714 185484 INFO nova.compute.manager [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Took 67.73 seconds to spawn the instance on the hypervisor.
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.714 185484 DEBUG nova.compute.manager [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.728 185484 INFO nova.compute.manager [None req-e6e803a4-8edc-43c7-be59-6f8d4cf568da - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.799 185484 INFO nova.compute.manager [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Took 74.19 seconds to build instance.
Jan 27 19:31:30 compute-0 nova_compute[185480]: 2026-01-27 19:31:30.822 185484 DEBUG oslo_concurrency.lockutils [None req-30db7cb9-3b3f-4464-bc03-0ce9caa75261 d57ebe90e53d40899a8b3f3ce873df18 2b8690906d754ad4b5878d33231c97f9 - - default default] Lock "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 74.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:31:31 compute-0 openstack_network_exporter[204477]: ERROR   19:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:31:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:31:31 compute-0 openstack_network_exporter[204477]: ERROR   19:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:31:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:31:31 compute-0 sshd-session[251164]: Invalid user user from 156.227.233.86 port 46518
Jan 27 19:31:31 compute-0 sshd-session[251164]: Connection closed by invalid user user 156.227.233.86 port 46518 [preauth]
Jan 27 19:31:32 compute-0 nova_compute[185480]: 2026-01-27 19:31:32.015 185484 DEBUG nova.compute.manager [req-18883a07-ec73-4399-b992-d34fb3647f19 req-39cfef0c-9288-4d7b-ad11-698793c03c5b bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Received event network-vif-plugged-2976aaab-c73e-4d12-88b9-4a36da5c35e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 19:31:32 compute-0 nova_compute[185480]: 2026-01-27 19:31:32.016 185484 DEBUG oslo_concurrency.lockutils [req-18883a07-ec73-4399-b992-d34fb3647f19 req-39cfef0c-9288-4d7b-ad11-698793c03c5b bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Acquiring lock "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:31:32 compute-0 nova_compute[185480]: 2026-01-27 19:31:32.016 185484 DEBUG oslo_concurrency.lockutils [req-18883a07-ec73-4399-b992-d34fb3647f19 req-39cfef0c-9288-4d7b-ad11-698793c03c5b bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:31:32 compute-0 nova_compute[185480]: 2026-01-27 19:31:32.016 185484 DEBUG oslo_concurrency.lockutils [req-18883a07-ec73-4399-b992-d34fb3647f19 req-39cfef0c-9288-4d7b-ad11-698793c03c5b bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] Lock "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:31:32 compute-0 nova_compute[185480]: 2026-01-27 19:31:32.017 185484 DEBUG nova.compute.manager [req-18883a07-ec73-4399-b992-d34fb3647f19 req-39cfef0c-9288-4d7b-ad11-698793c03c5b bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] No waiting events found dispatching network-vif-plugged-2976aaab-c73e-4d12-88b9-4a36da5c35e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 19:31:32 compute-0 nova_compute[185480]: 2026-01-27 19:31:32.017 185484 WARNING nova.compute.manager [req-18883a07-ec73-4399-b992-d34fb3647f19 req-39cfef0c-9288-4d7b-ad11-698793c03c5b bf6a6b35f6df40598becd61ef6652916 016a6efe8db640698e538e33fa601458 - - default default] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Received unexpected event network-vif-plugged-2976aaab-c73e-4d12-88b9-4a36da5c35e0 for instance with vm_state active and task_state None.
Jan 27 19:31:32 compute-0 sshd-session[251166]: Invalid user user from 156.227.233.86 port 59286
Jan 27 19:31:32 compute-0 sshd-session[251166]: Connection closed by invalid user user 156.227.233.86 port 59286 [preauth]
Jan 27 19:31:32 compute-0 nova_compute[185480]: 2026-01-27 19:31:32.669 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:33 compute-0 podman[251172]: 2026-01-27 19:31:33.341231801 +0000 UTC m=+0.101775508 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 19:31:33 compute-0 podman[251170]: 2026-01-27 19:31:33.347847574 +0000 UTC m=+0.119070186 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:31:33 compute-0 podman[251171]: 2026-01-27 19:31:33.371478829 +0000 UTC m=+0.139584503 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 27 19:31:33 compute-0 nova_compute[185480]: 2026-01-27 19:31:33.388 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:34 compute-0 sshd-session[251168]: Invalid user user from 156.227.233.86 port 46008
Jan 27 19:31:34 compute-0 sshd-session[251168]: Connection closed by invalid user user 156.227.233.86 port 46008 [preauth]
Jan 27 19:31:35 compute-0 sshd-session[251235]: Invalid user user from 156.227.233.86 port 37032
Jan 27 19:31:35 compute-0 sshd-session[251235]: Connection closed by invalid user user 156.227.233.86 port 37032 [preauth]
Jan 27 19:31:36 compute-0 sshd-session[251237]: Invalid user user from 156.227.233.86 port 50296
Jan 27 19:31:36 compute-0 sshd-session[251237]: Connection closed by invalid user user 156.227.233.86 port 50296 [preauth]
Jan 27 19:31:37 compute-0 sshd-session[251239]: Invalid user user from 156.227.233.86 port 35694
Jan 27 19:31:37 compute-0 sshd-session[251239]: Connection closed by invalid user user 156.227.233.86 port 35694 [preauth]
Jan 27 19:31:37 compute-0 nova_compute[185480]: 2026-01-27 19:31:37.674 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:38 compute-0 sshd-session[251242]: Invalid user user from 156.227.233.86 port 48870
Jan 27 19:31:38 compute-0 sshd-session[251242]: Connection closed by invalid user user 156.227.233.86 port 48870 [preauth]
Jan 27 19:31:38 compute-0 nova_compute[185480]: 2026-01-27 19:31:38.392 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:39 compute-0 sshd-session[251244]: Invalid user user from 156.227.233.86 port 35572
Jan 27 19:31:39 compute-0 sshd-session[251244]: Connection closed by invalid user user 156.227.233.86 port 35572 [preauth]
Jan 27 19:31:40 compute-0 sshd-session[251246]: Invalid user user from 156.227.233.86 port 48232
Jan 27 19:31:40 compute-0 sshd-session[251246]: Connection closed by invalid user user 156.227.233.86 port 48232 [preauth]
Jan 27 19:31:41 compute-0 sshd-session[251248]: Invalid user user from 156.227.233.86 port 38028
Jan 27 19:31:41 compute-0 sshd-session[251248]: Connection closed by invalid user user 156.227.233.86 port 38028 [preauth]
Jan 27 19:31:42 compute-0 nova_compute[185480]: 2026-01-27 19:31:42.679 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:43 compute-0 nova_compute[185480]: 2026-01-27 19:31:43.397 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:43 compute-0 sshd-session[251250]: Invalid user user from 156.227.233.86 port 55820
Jan 27 19:31:43 compute-0 sshd-session[251250]: Connection closed by invalid user user 156.227.233.86 port 55820 [preauth]
Jan 27 19:31:44 compute-0 sshd-session[251252]: Invalid user user from 156.227.233.86 port 60714
Jan 27 19:31:44 compute-0 sshd-session[251252]: Connection closed by invalid user user 156.227.233.86 port 60714 [preauth]
Jan 27 19:31:45 compute-0 podman[251258]: 2026-01-27 19:31:45.361484204 +0000 UTC m=+0.109564892 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, vcs-type=git)
Jan 27 19:31:45 compute-0 podman[251256]: 2026-01-27 19:31:45.363154975 +0000 UTC m=+0.124988802 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 19:31:45 compute-0 podman[251257]: 2026-01-27 19:31:45.380012702 +0000 UTC m=+0.134613171 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 19:31:45 compute-0 sshd-session[251254]: Invalid user user from 156.227.233.86 port 47590
Jan 27 19:31:45 compute-0 sshd-session[251254]: Connection closed by invalid user user 156.227.233.86 port 47590 [preauth]
Jan 27 19:31:46 compute-0 sshd-session[251315]: Invalid user user from 156.227.233.86 port 35182
Jan 27 19:31:46 compute-0 sshd-session[251315]: Connection closed by invalid user user 156.227.233.86 port 35182 [preauth]
Jan 27 19:31:47 compute-0 sshd-session[251317]: Invalid user user from 156.227.233.86 port 50878
Jan 27 19:31:47 compute-0 nova_compute[185480]: 2026-01-27 19:31:47.682 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:47 compute-0 sshd-session[251317]: Connection closed by invalid user user 156.227.233.86 port 50878 [preauth]
Jan 27 19:31:48 compute-0 nova_compute[185480]: 2026-01-27 19:31:48.411 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:49 compute-0 sshd-session[251319]: Invalid user user from 156.227.233.86 port 37528
Jan 27 19:31:49 compute-0 sshd-session[251319]: Connection closed by invalid user user 156.227.233.86 port 37528 [preauth]
Jan 27 19:31:50 compute-0 sshd-session[251321]: Invalid user user from 156.227.233.86 port 59370
Jan 27 19:31:50 compute-0 sshd-session[251321]: Connection closed by invalid user user 156.227.233.86 port 59370 [preauth]
Jan 27 19:31:51 compute-0 sshd-session[251324]: Invalid user user from 156.227.233.86 port 48030
Jan 27 19:31:51 compute-0 sshd-session[251324]: Connection closed by invalid user user 156.227.233.86 port 48030 [preauth]
Jan 27 19:31:52 compute-0 sshd-session[251326]: Invalid user user from 156.227.233.86 port 33384
Jan 27 19:31:52 compute-0 sshd-session[251326]: Connection closed by invalid user user 156.227.233.86 port 33384 [preauth]
Jan 27 19:31:52 compute-0 nova_compute[185480]: 2026-01-27 19:31:52.689 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:53 compute-0 sshd-session[251328]: Invalid user user from 156.227.233.86 port 48064
Jan 27 19:31:53 compute-0 sshd-session[251328]: Connection closed by invalid user user 156.227.233.86 port 48064 [preauth]
Jan 27 19:31:53 compute-0 nova_compute[185480]: 2026-01-27 19:31:53.415 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:54 compute-0 sshd-session[251330]: Invalid user user from 156.227.233.86 port 33234
Jan 27 19:31:54 compute-0 sshd-session[251330]: Connection closed by invalid user user 156.227.233.86 port 33234 [preauth]
Jan 27 19:31:55 compute-0 sshd-session[251332]: Invalid user user from 156.227.233.86 port 46860
Jan 27 19:31:55 compute-0 podman[251334]: 2026-01-27 19:31:55.341851582 +0000 UTC m=+0.108776421 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6, architecture=x86_64, release=1755695350, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-type=git)
Jan 27 19:31:55 compute-0 sshd-session[251332]: Connection closed by invalid user user 156.227.233.86 port 46860 [preauth]
Jan 27 19:31:55 compute-0 nova_compute[185480]: 2026-01-27 19:31:55.572 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:31:55 compute-0 nova_compute[185480]: 2026-01-27 19:31:55.572 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:31:55 compute-0 nova_compute[185480]: 2026-01-27 19:31:55.573 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:31:56 compute-0 sshd-session[251354]: Invalid user user from 156.227.233.86 port 60752
Jan 27 19:31:56 compute-0 nova_compute[185480]: 2026-01-27 19:31:56.277 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:31:56 compute-0 nova_compute[185480]: 2026-01-27 19:31:56.279 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:31:56 compute-0 nova_compute[185480]: 2026-01-27 19:31:56.280 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:31:56 compute-0 nova_compute[185480]: 2026-01-27 19:31:56.281 185484 DEBUG nova.objects.instance [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lazy-loading 'info_cache' on Instance uuid f79ddcc5-ee21-43e8-9d0d-60476a477361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:31:56 compute-0 sshd-session[251354]: Connection closed by invalid user user 156.227.233.86 port 60752 [preauth]
Jan 27 19:31:57 compute-0 sshd-session[251356]: Invalid user user from 156.227.233.86 port 47030
Jan 27 19:31:57 compute-0 sshd-session[251356]: Connection closed by invalid user user 156.227.233.86 port 47030 [preauth]
Jan 27 19:31:57 compute-0 nova_compute[185480]: 2026-01-27 19:31:57.693 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:58 compute-0 sshd-session[251358]: Invalid user user from 156.227.233.86 port 32854
Jan 27 19:31:58 compute-0 nova_compute[185480]: 2026-01-27 19:31:58.422 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:58 compute-0 sshd-session[251358]: Connection closed by invalid user user 156.227.233.86 port 32854 [preauth]
Jan 27 19:31:58 compute-0 nova_compute[185480]: 2026-01-27 19:31:58.772 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:31:59 compute-0 sshd-session[251360]: Invalid user user from 156.227.233.86 port 46626
Jan 27 19:31:59 compute-0 sshd-session[251360]: Connection closed by invalid user user 156.227.233.86 port 46626 [preauth]
Jan 27 19:31:59 compute-0 nova_compute[185480]: 2026-01-27 19:31:59.697 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Updating instance_info_cache with network_info: [{"id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "address": "fa:16:3e:de:a4:15", "network": {"id": "47175578-eb32-4720-93c5-05fa0d34701f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2004790355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50b0e23834964280a34973a87d80d1b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2f2dfe-3f", "ovs_interfaceid": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:31:59 compute-0 nova_compute[185480]: 2026-01-27 19:31:59.721 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:31:59 compute-0 nova_compute[185480]: 2026-01-27 19:31:59.722 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:31:59 compute-0 podman[201378]: time="2026-01-27T19:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:31:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 27 19:31:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4841 "" "Go-http-client/1.1"
Jan 27 19:32:00 compute-0 sshd-session[251362]: Invalid user user from 156.227.233.86 port 60238
Jan 27 19:32:00 compute-0 podman[251364]: 2026-01-27 19:32:00.976177178 +0000 UTC m=+0.117975118 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 19:32:01 compute-0 sshd-session[251362]: Connection closed by invalid user user 156.227.233.86 port 60238 [preauth]
Jan 27 19:32:01 compute-0 openstack_network_exporter[204477]: ERROR   19:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:32:01 compute-0 openstack_network_exporter[204477]: ERROR   19:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:32:01 compute-0 sshd-session[251384]: Invalid user user from 156.227.233.86 port 46910
Jan 27 19:32:02 compute-0 sshd-session[251384]: Connection closed by invalid user user 156.227.233.86 port 46910 [preauth]
Jan 27 19:32:02 compute-0 nova_compute[185480]: 2026-01-27 19:32:02.696 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:02 compute-0 sshd-session[251386]: Invalid user user from 156.227.233.86 port 54330
Jan 27 19:32:03 compute-0 sshd-session[251386]: Connection closed by invalid user user 156.227.233.86 port 54330 [preauth]
Jan 27 19:32:03 compute-0 nova_compute[185480]: 2026-01-27 19:32:03.300 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:03 compute-0 nova_compute[185480]: 2026-01-27 19:32:03.422 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:03 compute-0 nova_compute[185480]: 2026-01-27 19:32:03.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:32:03 compute-0 sshd-session[251389]: Invalid user user from 156.227.233.86 port 38410
Jan 27 19:32:04 compute-0 podman[251394]: 2026-01-27 19:32:04.1060195 +0000 UTC m=+0.115897658 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:32:04 compute-0 podman[251392]: 2026-01-27 19:32:04.130978887 +0000 UTC m=+0.147547530 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 19:32:04 compute-0 sshd-session[251389]: Connection closed by invalid user user 156.227.233.86 port 38410 [preauth]
Jan 27 19:32:04 compute-0 podman[251393]: 2026-01-27 19:32:04.172827822 +0000 UTC m=+0.181187262 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 27 19:32:04 compute-0 sshd-session[251459]: Invalid user user from 156.227.233.86 port 53100
Jan 27 19:32:05 compute-0 sshd-session[251459]: Connection closed by invalid user user 156.227.233.86 port 53100 [preauth]
Jan 27 19:32:06 compute-0 sshd-session[251476]: Invalid user user from 156.227.233.86 port 39272
Jan 27 19:32:06 compute-0 sshd-session[251476]: Connection closed by invalid user user 156.227.233.86 port 39272 [preauth]
Jan 27 19:32:06 compute-0 ovn_controller[97647]: 2026-01-27T19:32:06Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:85:f5 10.100.0.10
Jan 27 19:32:06 compute-0 ovn_controller[97647]: 2026-01-27T19:32:06Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:85:f5 10.100.0.10
Jan 27 19:32:06 compute-0 nova_compute[185480]: 2026-01-27 19:32:06.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:32:06 compute-0 nova_compute[185480]: 2026-01-27 19:32:06.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:32:06 compute-0 nova_compute[185480]: 2026-01-27 19:32:06.559 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:32:06 compute-0 nova_compute[185480]: 2026-01-27 19:32:06.560 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:32:06 compute-0 nova_compute[185480]: 2026-01-27 19:32:06.560 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:32:06 compute-0 nova_compute[185480]: 2026-01-27 19:32:06.561 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:32:06 compute-0 nova_compute[185480]: 2026-01-27 19:32:06.786 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:32:06 compute-0 nova_compute[185480]: 2026-01-27 19:32:06.879 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:32:06 compute-0 nova_compute[185480]: 2026-01-27 19:32:06.881 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:32:06 compute-0 nova_compute[185480]: 2026-01-27 19:32:06.975 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:32:06 compute-0 nova_compute[185480]: 2026-01-27 19:32:06.988 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:32:07 compute-0 nova_compute[185480]: 2026-01-27 19:32:07.051 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:32:07 compute-0 nova_compute[185480]: 2026-01-27 19:32:07.052 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:32:07 compute-0 sshd-session[251478]: Invalid user user from 156.227.233.86 port 52040
Jan 27 19:32:07 compute-0 nova_compute[185480]: 2026-01-27 19:32:07.132 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:32:07 compute-0 sshd-session[251478]: Connection closed by invalid user user 156.227.233.86 port 52040 [preauth]
Jan 27 19:32:07 compute-0 nova_compute[185480]: 2026-01-27 19:32:07.676 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:32:07 compute-0 nova_compute[185480]: 2026-01-27 19:32:07.678 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4990MB free_disk=72.31942749023438GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:32:07 compute-0 nova_compute[185480]: 2026-01-27 19:32:07.679 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:32:07 compute-0 nova_compute[185480]: 2026-01-27 19:32:07.717 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:32:07 compute-0 nova_compute[185480]: 2026-01-27 19:32:07.718 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.068 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance f79ddcc5-ee21-43e8-9d0d-60476a477361 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.069 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.070 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.071 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:32:08 compute-0 sshd-session[251494]: Invalid user user from 156.227.233.86 port 34770
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.157 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing inventories for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.240 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating ProviderTree inventory for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.240 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Updating inventory in ProviderTree for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.265 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing aggregate associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.296 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Refreshing trait associations for resource provider 8877e97b-aaf6-4210-a385-0f49c1a02906, traits: HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NODE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AESNI,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 19:32:08 compute-0 sshd-session[251494]: Connection closed by invalid user user 156.227.233.86 port 34770 [preauth]
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.388 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.412 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.426 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.432 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:32:08 compute-0 nova_compute[185480]: 2026-01-27 19:32:08.432 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:32:09 compute-0 sshd-session[251496]: Invalid user user from 156.227.233.86 port 49462
Jan 27 19:32:09 compute-0 sshd-session[251496]: Connection closed by invalid user user 156.227.233.86 port 49462 [preauth]
Jan 27 19:32:09 compute-0 nova_compute[185480]: 2026-01-27 19:32:09.434 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:32:09 compute-0 nova_compute[185480]: 2026-01-27 19:32:09.435 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:32:09 compute-0 nova_compute[185480]: 2026-01-27 19:32:09.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:32:10 compute-0 sshd-session[251498]: Invalid user user from 156.227.233.86 port 60958
Jan 27 19:32:10 compute-0 sshd-session[251498]: Connection closed by invalid user user 156.227.233.86 port 60958 [preauth]
Jan 27 19:32:10 compute-0 nova_compute[185480]: 2026-01-27 19:32:10.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:32:10 compute-0 nova_compute[185480]: 2026-01-27 19:32:10.972 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:11 compute-0 ovn_controller[97647]: 2026-01-27T19:32:11Z|00123|binding|INFO|Releasing lport 454b8599-cdf3-4c6e-8721-8fbc90ddaa79 from this chassis (sb_readonly=0)
Jan 27 19:32:11 compute-0 ovn_controller[97647]: 2026-01-27T19:32:11Z|00124|binding|INFO|Releasing lport 48c695f0-2fed-4bb1-9b53-847c4dc25e7f from this chassis (sb_readonly=0)
Jan 27 19:32:11 compute-0 nova_compute[185480]: 2026-01-27 19:32:11.170 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:11 compute-0 sshd-session[251500]: Invalid user user from 156.227.233.86 port 49044
Jan 27 19:32:11 compute-0 sshd-session[251500]: Connection closed by invalid user user 156.227.233.86 port 49044 [preauth]
Jan 27 19:32:11 compute-0 nova_compute[185480]: 2026-01-27 19:32:11.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:32:12 compute-0 sshd-session[251502]: Invalid user user from 156.227.233.86 port 36156
Jan 27 19:32:12 compute-0 sshd-session[251502]: Connection closed by invalid user user 156.227.233.86 port 36156 [preauth]
Jan 27 19:32:12 compute-0 nova_compute[185480]: 2026-01-27 19:32:12.723 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:13 compute-0 sshd-session[251504]: Invalid user user from 156.227.233.86 port 50710
Jan 27 19:32:13 compute-0 sshd-session[251504]: Connection closed by invalid user user 156.227.233.86 port 50710 [preauth]
Jan 27 19:32:13 compute-0 nova_compute[185480]: 2026-01-27 19:32:13.430 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:13 compute-0 nova_compute[185480]: 2026-01-27 19:32:13.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:32:14 compute-0 sshd-session[251506]: Invalid user user from 156.227.233.86 port 37794
Jan 27 19:32:14 compute-0 sshd-session[251506]: Connection closed by invalid user user 156.227.233.86 port 37794 [preauth]
Jan 27 19:32:14 compute-0 nova_compute[185480]: 2026-01-27 19:32:14.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:32:15 compute-0 sshd-session[251508]: Invalid user user from 156.227.233.86 port 51936
Jan 27 19:32:15 compute-0 sshd-session[251508]: Connection closed by invalid user user 156.227.233.86 port 51936 [preauth]
Jan 27 19:32:15 compute-0 nova_compute[185480]: 2026-01-27 19:32:15.977 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:16 compute-0 sshd-session[251510]: Invalid user user from 156.227.233.86 port 40084
Jan 27 19:32:16 compute-0 podman[251512]: 2026-01-27 19:32:16.343012697 +0000 UTC m=+0.100864345 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:32:16 compute-0 podman[251513]: 2026-01-27 19:32:16.366444127 +0000 UTC m=+0.114552664 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 19:32:16 compute-0 podman[251514]: 2026-01-27 19:32:16.381627463 +0000 UTC m=+0.122006890 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, distribution-scope=public, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, name=ubi9, release-0.7.12=, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30)
Jan 27 19:32:16 compute-0 sshd-session[251510]: Connection closed by invalid user user 156.227.233.86 port 40084 [preauth]
Jan 27 19:32:17 compute-0 sshd-session[251572]: Invalid user user from 156.227.233.86 port 55214
Jan 27 19:32:17 compute-0 sshd-session[251572]: Connection closed by invalid user user 156.227.233.86 port 55214 [preauth]
Jan 27 19:32:17 compute-0 nova_compute[185480]: 2026-01-27 19:32:17.728 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:18 compute-0 sshd-session[251574]: Invalid user user from 156.227.233.86 port 43046
Jan 27 19:32:18 compute-0 nova_compute[185480]: 2026-01-27 19:32:18.434 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:18 compute-0 sshd-session[251574]: Connection closed by invalid user user 156.227.233.86 port 43046 [preauth]
Jan 27 19:32:19 compute-0 sshd-session[251576]: Invalid user user from 156.227.233.86 port 58234
Jan 27 19:32:19 compute-0 sshd-session[251576]: Connection closed by invalid user user 156.227.233.86 port 58234 [preauth]
Jan 27 19:32:20 compute-0 sshd-session[251579]: Invalid user user from 156.227.233.86 port 42226
Jan 27 19:32:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:32:20.545 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:32:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:32:20.546 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:32:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:32:20.547 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:32:20 compute-0 sshd-session[251579]: Connection closed by invalid user user 156.227.233.86 port 42226 [preauth]
Jan 27 19:32:20 compute-0 ovn_controller[97647]: 2026-01-27T19:32:20Z|00125|binding|INFO|Releasing lport 454b8599-cdf3-4c6e-8721-8fbc90ddaa79 from this chassis (sb_readonly=0)
Jan 27 19:32:20 compute-0 ovn_controller[97647]: 2026-01-27T19:32:20Z|00126|binding|INFO|Releasing lport 48c695f0-2fed-4bb1-9b53-847c4dc25e7f from this chassis (sb_readonly=0)
Jan 27 19:32:21 compute-0 nova_compute[185480]: 2026-01-27 19:32:21.066 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:21 compute-0 sshd-session[251581]: Invalid user user from 156.227.233.86 port 56632
Jan 27 19:32:21 compute-0 sshd-session[251581]: Connection closed by invalid user user 156.227.233.86 port 56632 [preauth]
Jan 27 19:32:22 compute-0 sshd-session[251583]: Invalid user user from 156.227.233.86 port 40518
Jan 27 19:32:22 compute-0 sshd-session[251583]: Connection closed by invalid user user 156.227.233.86 port 40518 [preauth]
Jan 27 19:32:22 compute-0 nova_compute[185480]: 2026-01-27 19:32:22.733 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:23 compute-0 nova_compute[185480]: 2026-01-27 19:32:23.437 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:23 compute-0 sshd-session[251585]: Invalid user ubuntu from 156.227.233.86 port 58804
Jan 27 19:32:23 compute-0 sshd-session[251585]: Connection closed by invalid user ubuntu 156.227.233.86 port 58804 [preauth]
Jan 27 19:32:24 compute-0 sshd-session[251587]: Invalid user ubuntu from 156.227.233.86 port 45076
Jan 27 19:32:24 compute-0 sshd-session[251587]: Connection closed by invalid user ubuntu 156.227.233.86 port 45076 [preauth]
Jan 27 19:32:25 compute-0 sshd-session[251589]: Invalid user ubuntu from 156.227.233.86 port 59450
Jan 27 19:32:26 compute-0 sshd-session[251589]: Connection closed by invalid user ubuntu 156.227.233.86 port 59450 [preauth]
Jan 27 19:32:26 compute-0 podman[251591]: 2026-01-27 19:32:26.171109029 +0000 UTC m=+0.154840360 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter)
Jan 27 19:32:27 compute-0 sshd-session[251611]: Invalid user ubuntu from 156.227.233.86 port 52662
Jan 27 19:32:27 compute-0 sshd-session[251611]: Connection closed by invalid user ubuntu 156.227.233.86 port 52662 [preauth]
Jan 27 19:32:27 compute-0 nova_compute[185480]: 2026-01-27 19:32:27.737 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:27 compute-0 sshd-session[251613]: Invalid user sol from 45.148.10.240 port 37086
Jan 27 19:32:28 compute-0 sshd-session[251615]: Invalid user ubuntu from 156.227.233.86 port 41300
Jan 27 19:32:28 compute-0 sshd-session[251613]: Connection closed by invalid user sol 45.148.10.240 port 37086 [preauth]
Jan 27 19:32:28 compute-0 sshd-session[251615]: Connection closed by invalid user ubuntu 156.227.233.86 port 41300 [preauth]
Jan 27 19:32:28 compute-0 nova_compute[185480]: 2026-01-27 19:32:28.440 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:29 compute-0 sshd-session[251617]: Invalid user ubuntu from 156.227.233.86 port 54556
Jan 27 19:32:29 compute-0 sshd-session[251617]: Connection closed by invalid user ubuntu 156.227.233.86 port 54556 [preauth]
Jan 27 19:32:29 compute-0 podman[201378]: time="2026-01-27T19:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:32:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 27 19:32:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4847 "" "Go-http-client/1.1"
Jan 27 19:32:30 compute-0 sshd-session[251619]: Invalid user ubuntu from 156.227.233.86 port 39124
Jan 27 19:32:30 compute-0 sshd-session[251619]: Connection closed by invalid user ubuntu 156.227.233.86 port 39124 [preauth]
Jan 27 19:32:31 compute-0 sshd-session[251621]: Invalid user ubuntu from 156.227.233.86 port 54594
Jan 27 19:32:31 compute-0 sshd-session[251621]: Connection closed by invalid user ubuntu 156.227.233.86 port 54594 [preauth]
Jan 27 19:32:31 compute-0 podman[251623]: 2026-01-27 19:32:31.299047751 +0000 UTC m=+0.139246966 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 19:32:31 compute-0 openstack_network_exporter[204477]: ERROR   19:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:32:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:32:31 compute-0 openstack_network_exporter[204477]: ERROR   19:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:32:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.105 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.106 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.118 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.118 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.121 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.122 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f79ddcc5-ee21-43e8-9d0d-60476a477361 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.125 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.125 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.126 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.125 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.129 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:32.133 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75d8c1b9e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:32:32 compute-0 sshd-session[251642]: Invalid user ubuntu from 156.227.233.86 port 42746
Jan 27 19:32:32 compute-0 sshd-session[251642]: Connection closed by invalid user ubuntu 156.227.233.86 port 42746 [preauth]
Jan 27 19:32:32 compute-0 nova_compute[185480]: 2026-01-27 19:32:32.742 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:33 compute-0 sshd-session[251645]: Invalid user ubuntu from 156.227.233.86 port 58642
Jan 27 19:32:33 compute-0 sshd-session[251645]: Connection closed by invalid user ubuntu 156.227.233.86 port 58642 [preauth]
Jan 27 19:32:33 compute-0 nova_compute[185480]: 2026-01-27 19:32:33.444 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:33.609 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1981 Content-Type: application/json Date: Tue, 27 Jan 2026 19:32:32 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f67038a2-6c2d-4cc1-a3dd-2fc1a5d716fb x-openstack-request-id: req-f67038a2-6c2d-4cc1-a3dd-2fc1a5d716fb _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:32:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:33.610 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f79ddcc5-ee21-43e8-9d0d-60476a477361", "name": "tempest-ServerActionsTestJSON-server-1826919436", "status": "ACTIVE", "tenant_id": "50b0e23834964280a34973a87d80d1b8", "user_id": "79532101c66342a980a90799ac41a442", "metadata": {}, "hostId": "b1181980f97b1ec08e2bf869b7f7812beda402ece77f78093638d8d3", "image": {"id": "729797c6-2677-44bd-a4a8-949d1f57b0a2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/729797c6-2677-44bd-a4a8-949d1f57b0a2"}]}, "flavor": {"id": "49f81b8c-e0df-4a53-87c6-69576be59651", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/49f81b8c-e0df-4a53-87c6-69576be59651"}]}, "created": "2026-01-27T19:28:53Z", "updated": "2026-01-27T19:29:19Z", "addresses": {"tempest-ServerActionsTestJSON-2004790355-network": [{"version": 4, "addr": "10.100.0.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:de:a4:15"}, {"version": 4, "addr": "192.168.122.230", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:de:a4:15"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1789477663", "OS-SRV-USG:launched_at": "2026-01-27T19:29:19.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1123184984"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:32:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:33.610 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f79ddcc5-ee21-43e8-9d0d-60476a477361 used request id req-f67038a2-6c2d-4cc1-a3dd-2fc1a5d716fb request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:32:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:33.612 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f79ddcc5-ee21-43e8-9d0d-60476a477361', 'name': 'tempest-ServerActionsTestJSON-server-1826919436', 'flavor': {'id': '49f81b8c-e0df-4a53-87c6-69576be59651', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '729797c6-2677-44bd-a4a8-949d1f57b0a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '50b0e23834964280a34973a87d80d1b8', 'user_id': '79532101c66342a980a90799ac41a442', 'hostId': 'b1181980f97b1ec08e2bf869b7f7812beda402ece77f78093638d8d3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:32:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:33.616 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 19:32:33 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:33.618 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ca8cedfba0afd817cd512161dc0139cec9b5135a1ea011f4957438b38f4f459a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 19:32:34 compute-0 sshd-session[251647]: Invalid user ubuntu from 156.227.233.86 port 46900
Jan 27 19:32:34 compute-0 podman[251650]: 2026-01-27 19:32:34.320772858 +0000 UTC m=+0.115216190 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 19:32:34 compute-0 podman[251649]: 2026-01-27 19:32:34.344392593 +0000 UTC m=+0.139240686 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:32:34 compute-0 sshd-session[251647]: Connection closed by invalid user ubuntu 156.227.233.86 port 46900 [preauth]
Jan 27 19:32:34 compute-0 podman[251651]: 2026-01-27 19:32:34.385562281 +0000 UTC m=+0.168037858 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.497 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 27 Jan 2026 19:32:33 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-89a40339-5ed1-4ea1-acb8-4f334e08d3d5 x-openstack-request-id: req-89a40339-5ed1-4ea1-acb8-4f334e08d3d5 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.497 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "14a5dad2-3e22-42a5-bd6e-7255c6b09d8f", "name": "tempest-TestServerBasicOps-server-252932369", "status": "ACTIVE", "tenant_id": "2b8690906d754ad4b5878d33231c97f9", "user_id": "d57ebe90e53d40899a8b3f3ce873df18", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "8d5ef883a6ce7ade6fef1c5befa43997991af0ec42f03bd558e08f8b", "image": {"id": "729797c6-2677-44bd-a4a8-949d1f57b0a2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/729797c6-2677-44bd-a4a8-949d1f57b0a2"}]}, "flavor": {"id": "49f81b8c-e0df-4a53-87c6-69576be59651", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/49f81b8c-e0df-4a53-87c6-69576be59651"}]}, "created": "2026-01-27T19:30:11Z", "updated": "2026-01-27T19:31:30Z", "addresses": {"tempest-TestServerBasicOps-587329447-network": [{"version": 4, "addr": "10.100.0.10", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0c:85:f5"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-975359150", "OS-SRV-USG:launched_at": "2026-01-27T19:31:30.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1234210180"}, {"name": "tempest-securitygroup--168066066"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.497 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f used request id req-89a40339-5ed1-4ea1-acb8-4f334e08d3d5 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.499 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '14a5dad2-3e22-42a5-bd6e-7255c6b09d8f', 'name': 'tempest-TestServerBasicOps-server-252932369', 'flavor': {'id': '49f81b8c-e0df-4a53-87c6-69576be59651', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '729797c6-2677-44bd-a4a8-949d1f57b0a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2b8690906d754ad4b5878d33231c97f9', 'user_id': 'd57ebe90e53d40899a8b3f3ce873df18', 'hostId': '8d5ef883a6ce7ade6fef1c5befa43997991af0ec42f03bd558e08f8b', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.500 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.500 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T19:32:34.500334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.525 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.525 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.551 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.552 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.553 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.553 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T19:32:34.553292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.596 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/memory.usage volume: 42.6796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.636 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/memory.usage volume: 42.63671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.636 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.637 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.637 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.637 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.638 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.638 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T19:32:34.637896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.639 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.640 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.640 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T19:32:34.640571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.646 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f79ddcc5-ee21-43e8-9d0d-60476a477361 / tapeb2f2dfe-3f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.646 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.652 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f / tap2976aaab-c7 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.653 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.654 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.654 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.654 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.654 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.654 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.655 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T19:32:34.654869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.727 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.read.requests volume: 1094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.728 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.806 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.read.requests volume: 1120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.807 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.808 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.809 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.809 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.809 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.809 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.810 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.812 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.812 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.812 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.812 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.813 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.813 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.814 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.815 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.815 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.815 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T19:32:34.809449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.816 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T19:32:34.812621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.816 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.817 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.817 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.818 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.819 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.819 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T19:32:34.815985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.819 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.820 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.820 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.821 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/cpu volume: 36810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.821 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/cpu volume: 35010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.822 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T19:32:34.820449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.822 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.823 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.823 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.823 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.823 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.823 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.read.bytes volume: 30530048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.824 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T19:32:34.823534) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.824 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.825 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.read.bytes volume: 30800384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.825 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.826 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.826 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.827 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.827 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.827 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.827 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.827 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T19:32:34.827464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.828 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.829 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.830 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.830 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.830 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.830 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.read.latency volume: 1248334478 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.831 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.read.latency volume: 99676992 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.831 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.read.latency volume: 2362261547 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.832 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.read.latency volume: 236045790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.832 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.833 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T19:32:34.830362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.833 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.833 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.834 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.834 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.834 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T19:32:34.834147) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.836 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.836 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T19:32:34.836905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.837 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.838 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.839 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.839 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.839 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.840 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.840 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.840 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.840 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.841 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.841 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.842 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T19:32:34.840205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.842 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.843 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.843 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.843 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.844 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T19:32:34.844047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.844 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.844 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.845 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.845 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.846 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.846 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.846 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.846 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.846 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.write.latency volume: 7183658627 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.847 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T19:32:34.846476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.847 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.848 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.write.latency volume: 21584332032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.848 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.849 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.850 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.850 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.851 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.852 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.852 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T19:32:34.850402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.853 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.853 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.write.requests volume: 327 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.853 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.853 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.write.requests volume: 303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.853 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.854 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.854 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.855 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.855 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.855 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.856 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.856 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.856 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.856 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.856 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.857 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1826919436>, <NovaLikeServer: tempest-TestServerBasicOps-server-252932369>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1826919436>, <NovaLikeServer: tempest-TestServerBasicOps-server-252932369>]
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.857 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.857 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.857 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.857 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.858 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.858 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.858 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T19:32:34.853000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T19:32:34.855354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.859 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T19:32:34.856725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T19:32:34.857890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.859 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.859 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.859 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.860 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T19:32:34.859541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.860 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.860 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.860 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.861 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.write.bytes volume: 73089024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.861 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.861 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.write.bytes volume: 72933376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.861 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.862 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.862 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.863 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T19:32:34.860917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.863 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.incoming.bytes volume: 4475 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.863 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.864 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.864 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.864 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.864 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.864 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T19:32:34.863133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.865 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1826919436>, <NovaLikeServer: tempest-TestServerBasicOps-server-252932369>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1826919436>, <NovaLikeServer: tempest-TestServerBasicOps-server-252932369>]
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T19:32:34.864729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:34 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:32:34.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:32:35 compute-0 sshd-session[251717]: Invalid user ubuntu from 156.227.233.86 port 60960
Jan 27 19:32:35 compute-0 sshd-session[251717]: Connection closed by invalid user ubuntu 156.227.233.86 port 60960 [preauth]
Jan 27 19:32:36 compute-0 sshd-session[251719]: Invalid user ubuntu from 156.227.233.86 port 47104
Jan 27 19:32:36 compute-0 sshd-session[251719]: Connection closed by invalid user ubuntu 156.227.233.86 port 47104 [preauth]
Jan 27 19:32:37 compute-0 sshd-session[251721]: Invalid user ubuntu from 156.227.233.86 port 60658
Jan 27 19:32:37 compute-0 sshd-session[251721]: Connection closed by invalid user ubuntu 156.227.233.86 port 60658 [preauth]
Jan 27 19:32:37 compute-0 nova_compute[185480]: 2026-01-27 19:32:37.746 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:38 compute-0 sshd-session[251724]: Invalid user ubuntu from 156.227.233.86 port 51260
Jan 27 19:32:38 compute-0 sshd-session[251724]: Connection closed by invalid user ubuntu 156.227.233.86 port 51260 [preauth]
Jan 27 19:32:38 compute-0 nova_compute[185480]: 2026-01-27 19:32:38.447 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:39 compute-0 sshd-session[251726]: Invalid user ubuntu from 156.227.233.86 port 41494
Jan 27 19:32:39 compute-0 sshd-session[251726]: Connection closed by invalid user ubuntu 156.227.233.86 port 41494 [preauth]
Jan 27 19:32:40 compute-0 sshd-session[251728]: Invalid user ubuntu from 156.227.233.86 port 58522
Jan 27 19:32:40 compute-0 sshd-session[251728]: Connection closed by invalid user ubuntu 156.227.233.86 port 58522 [preauth]
Jan 27 19:32:41 compute-0 sshd-session[251730]: Invalid user ubuntu from 156.227.233.86 port 44298
Jan 27 19:32:41 compute-0 sshd-session[251730]: Connection closed by invalid user ubuntu 156.227.233.86 port 44298 [preauth]
Jan 27 19:32:42 compute-0 sshd-session[251732]: Invalid user ubuntu from 156.227.233.86 port 57200
Jan 27 19:32:42 compute-0 sshd-session[251732]: Connection closed by invalid user ubuntu 156.227.233.86 port 57200 [preauth]
Jan 27 19:32:42 compute-0 nova_compute[185480]: 2026-01-27 19:32:42.751 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:43 compute-0 sshd-session[251734]: Invalid user ubuntu from 156.227.233.86 port 43744
Jan 27 19:32:43 compute-0 nova_compute[185480]: 2026-01-27 19:32:43.450 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:43 compute-0 sshd-session[251734]: Connection closed by invalid user ubuntu 156.227.233.86 port 43744 [preauth]
Jan 27 19:32:44 compute-0 sshd-session[251736]: Invalid user ubuntu from 156.227.233.86 port 58438
Jan 27 19:32:44 compute-0 sshd-session[251736]: Connection closed by invalid user ubuntu 156.227.233.86 port 58438 [preauth]
Jan 27 19:32:45 compute-0 sshd-session[251738]: Invalid user ubuntu from 156.227.233.86 port 46644
Jan 27 19:32:45 compute-0 sshd-session[251738]: Connection closed by invalid user ubuntu 156.227.233.86 port 46644 [preauth]
Jan 27 19:32:46 compute-0 sshd-session[251740]: Invalid user ubuntu from 156.227.233.86 port 35342
Jan 27 19:32:46 compute-0 podman[251742]: 2026-01-27 19:32:46.637446802 +0000 UTC m=+0.086232224 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:32:46 compute-0 podman[251743]: 2026-01-27 19:32:46.65556057 +0000 UTC m=+0.099506522 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 19:32:46 compute-0 podman[251744]: 2026-01-27 19:32:46.68307234 +0000 UTC m=+0.120772148 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, name=ubi9, io.openshift.tags=base rhel9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 19:32:46 compute-0 sshd-session[251740]: Connection closed by invalid user ubuntu 156.227.233.86 port 35342 [preauth]
Jan 27 19:32:47 compute-0 sshd-session[251801]: Invalid user ubuntu from 156.227.233.86 port 54254
Jan 27 19:32:47 compute-0 sshd-session[251801]: Connection closed by invalid user ubuntu 156.227.233.86 port 54254 [preauth]
Jan 27 19:32:47 compute-0 nova_compute[185480]: 2026-01-27 19:32:47.754 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:48 compute-0 nova_compute[185480]: 2026-01-27 19:32:48.452 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:48 compute-0 sshd-session[251803]: Invalid user ubuntu from 156.227.233.86 port 40424
Jan 27 19:32:48 compute-0 sshd-session[251803]: Connection closed by invalid user ubuntu 156.227.233.86 port 40424 [preauth]
Jan 27 19:32:49 compute-0 sshd-session[251805]: Invalid user ubuntu from 156.227.233.86 port 53374
Jan 27 19:32:49 compute-0 sshd-session[251805]: Connection closed by invalid user ubuntu 156.227.233.86 port 53374 [preauth]
Jan 27 19:32:50 compute-0 sshd-session[251808]: Invalid user ubuntu from 156.227.233.86 port 40362
Jan 27 19:32:50 compute-0 sshd-session[251808]: Connection closed by invalid user ubuntu 156.227.233.86 port 40362 [preauth]
Jan 27 19:32:51 compute-0 sshd-session[251810]: Invalid user ubuntu from 156.227.233.86 port 55628
Jan 27 19:32:51 compute-0 ovn_controller[97647]: 2026-01-27T19:32:51Z|00127|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 27 19:32:51 compute-0 sshd-session[251810]: Connection closed by invalid user ubuntu 156.227.233.86 port 55628 [preauth]
Jan 27 19:32:52 compute-0 sshd-session[251812]: Invalid user ubuntu from 156.227.233.86 port 46336
Jan 27 19:32:52 compute-0 nova_compute[185480]: 2026-01-27 19:32:52.759 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:52 compute-0 sshd-session[251812]: Connection closed by invalid user ubuntu 156.227.233.86 port 46336 [preauth]
Jan 27 19:32:53 compute-0 nova_compute[185480]: 2026-01-27 19:32:53.455 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:53 compute-0 sshd-session[251814]: Invalid user ubuntu from 156.227.233.86 port 35978
Jan 27 19:32:53 compute-0 sshd-session[251814]: Connection closed by invalid user ubuntu 156.227.233.86 port 35978 [preauth]
Jan 27 19:32:54 compute-0 sshd-session[251816]: Invalid user ubuntu from 156.227.233.86 port 50042
Jan 27 19:32:54 compute-0 sshd-session[251816]: Connection closed by invalid user ubuntu 156.227.233.86 port 50042 [preauth]
Jan 27 19:32:55 compute-0 sshd-session[251818]: Invalid user ubuntu from 156.227.233.86 port 39736
Jan 27 19:32:55 compute-0 sshd-session[251818]: Connection closed by invalid user ubuntu 156.227.233.86 port 39736 [preauth]
Jan 27 19:32:56 compute-0 podman[251822]: 2026-01-27 19:32:56.380995562 +0000 UTC m=+0.143662175 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41)
Jan 27 19:32:56 compute-0 nova_compute[185480]: 2026-01-27 19:32:56.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:32:56 compute-0 nova_compute[185480]: 2026-01-27 19:32:56.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:32:56 compute-0 sshd-session[251820]: Invalid user ubuntu from 156.227.233.86 port 53656
Jan 27 19:32:56 compute-0 sshd-session[251820]: Connection closed by invalid user ubuntu 156.227.233.86 port 53656 [preauth]
Jan 27 19:32:57 compute-0 nova_compute[185480]: 2026-01-27 19:32:57.486 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:32:57 compute-0 nova_compute[185480]: 2026-01-27 19:32:57.487 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:32:57 compute-0 nova_compute[185480]: 2026-01-27 19:32:57.488 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:32:57 compute-0 nova_compute[185480]: 2026-01-27 19:32:57.762 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:57 compute-0 sshd-session[251842]: Invalid user ubuntu from 156.227.233.86 port 41662
Jan 27 19:32:57 compute-0 sshd-session[251842]: Connection closed by invalid user ubuntu 156.227.233.86 port 41662 [preauth]
Jan 27 19:32:58 compute-0 nova_compute[185480]: 2026-01-27 19:32:58.458 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:32:58 compute-0 sshd-session[251844]: Invalid user ubuntu from 156.227.233.86 port 57082
Jan 27 19:32:59 compute-0 sshd-session[251844]: Connection closed by invalid user ubuntu 156.227.233.86 port 57082 [preauth]
Jan 27 19:32:59 compute-0 nova_compute[185480]: 2026-01-27 19:32:59.662 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Updating instance_info_cache with network_info: [{"id": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "address": "fa:16:3e:0c:85:f5", "network": {"id": "4edcfe4e-277a-432a-a139-0a06cca1f6d2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-587329447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b8690906d754ad4b5878d33231c97f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2976aaab-c7", "ovs_interfaceid": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:32:59 compute-0 nova_compute[185480]: 2026-01-27 19:32:59.686 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:32:59 compute-0 nova_compute[185480]: 2026-01-27 19:32:59.687 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:32:59 compute-0 podman[201378]: time="2026-01-27T19:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:32:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 27 19:32:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4841 "" "Go-http-client/1.1"
Jan 27 19:32:59 compute-0 sshd-session[251846]: Invalid user ubuntu from 156.227.233.86 port 40172
Jan 27 19:33:00 compute-0 sshd-session[251846]: Connection closed by invalid user ubuntu 156.227.233.86 port 40172 [preauth]
Jan 27 19:33:00 compute-0 sshd-session[251848]: Invalid user ubuntu from 156.227.233.86 port 56292
Jan 27 19:33:01 compute-0 sshd-session[251848]: Connection closed by invalid user ubuntu 156.227.233.86 port 56292 [preauth]
Jan 27 19:33:01 compute-0 openstack_network_exporter[204477]: ERROR   19:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:33:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:33:01 compute-0 openstack_network_exporter[204477]: ERROR   19:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:33:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:33:01 compute-0 sshd-session[251850]: Invalid user ubuntu from 156.227.233.86 port 39038
Jan 27 19:33:02 compute-0 podman[251852]: 2026-01-27 19:33:02.041114696 +0000 UTC m=+0.083330251 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 19:33:02 compute-0 sshd-session[251850]: Connection closed by invalid user ubuntu 156.227.233.86 port 39038 [preauth]
Jan 27 19:33:02 compute-0 nova_compute[185480]: 2026-01-27 19:33:02.777 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:02 compute-0 sshd-session[251872]: Invalid user ubuntu from 156.227.233.86 port 53214
Jan 27 19:33:03 compute-0 sshd-session[251872]: Connection closed by invalid user ubuntu 156.227.233.86 port 53214 [preauth]
Jan 27 19:33:03 compute-0 nova_compute[185480]: 2026-01-27 19:33:03.461 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:03 compute-0 nova_compute[185480]: 2026-01-27 19:33:03.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:33:03 compute-0 sshd-session[251874]: Invalid user ubuntu from 156.227.233.86 port 41376
Jan 27 19:33:04 compute-0 sshd-session[251874]: Connection closed by invalid user ubuntu 156.227.233.86 port 41376 [preauth]
Jan 27 19:33:05 compute-0 sshd-session[251876]: Invalid user ubuntu from 156.227.233.86 port 55988
Jan 27 19:33:05 compute-0 podman[251878]: 2026-01-27 19:33:05.107636334 +0000 UTC m=+0.089145995 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:33:05 compute-0 podman[251880]: 2026-01-27 19:33:05.131122425 +0000 UTC m=+0.100569818 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:33:05 compute-0 podman[251879]: 2026-01-27 19:33:05.152032422 +0000 UTC m=+0.123597427 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
Jan 27 19:33:05 compute-0 sshd-session[251876]: Connection closed by invalid user ubuntu 156.227.233.86 port 55988 [preauth]
Jan 27 19:33:06 compute-0 sshd-session[251943]: Invalid user ubuntu from 156.227.233.86 port 41928
Jan 27 19:33:06 compute-0 sshd-session[251943]: Connection closed by invalid user ubuntu 156.227.233.86 port 41928 [preauth]
Jan 27 19:33:07 compute-0 sshd-session[251945]: Invalid user ubuntu from 156.227.233.86 port 54126
Jan 27 19:33:07 compute-0 sshd-session[251945]: Connection closed by invalid user ubuntu 156.227.233.86 port 54126 [preauth]
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.517 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.549 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.551 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.552 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.553 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.650 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.749 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.751 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.783 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.833 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.845 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.911 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.913 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:33:07 compute-0 nova_compute[185480]: 2026-01-27 19:33:07.983 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:33:08 compute-0 sshd-session[251947]: Invalid user ubuntu from 156.227.233.86 port 40072
Jan 27 19:33:08 compute-0 sshd-session[251947]: Connection closed by invalid user ubuntu 156.227.233.86 port 40072 [preauth]
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.466 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.509 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.511 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4939MB free_disk=72.31868743896484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.511 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.512 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.607 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance f79ddcc5-ee21-43e8-9d0d-60476a477361 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.608 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.608 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.609 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.682 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.698 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.701 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:33:08 compute-0 nova_compute[185480]: 2026-01-27 19:33:08.702 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:33:09 compute-0 sshd-session[251961]: Invalid user ubuntu from 156.227.233.86 port 54610
Jan 27 19:33:09 compute-0 sshd-session[251961]: Connection closed by invalid user ubuntu 156.227.233.86 port 54610 [preauth]
Jan 27 19:33:10 compute-0 sshd-session[251963]: Invalid user ubuntu from 156.227.233.86 port 41334
Jan 27 19:33:10 compute-0 sshd-session[251963]: Connection closed by invalid user ubuntu 156.227.233.86 port 41334 [preauth]
Jan 27 19:33:10 compute-0 nova_compute[185480]: 2026-01-27 19:33:10.701 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:33:10 compute-0 nova_compute[185480]: 2026-01-27 19:33:10.702 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:33:11 compute-0 sshd-session[251965]: Invalid user ubuntu from 156.227.233.86 port 58426
Jan 27 19:33:11 compute-0 sshd-session[251965]: Connection closed by invalid user ubuntu 156.227.233.86 port 58426 [preauth]
Jan 27 19:33:11 compute-0 nova_compute[185480]: 2026-01-27 19:33:11.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:33:11 compute-0 nova_compute[185480]: 2026-01-27 19:33:11.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:33:12 compute-0 sshd-session[251967]: Invalid user ubuntu from 156.227.233.86 port 45678
Jan 27 19:33:12 compute-0 sshd-session[251967]: Connection closed by invalid user ubuntu 156.227.233.86 port 45678 [preauth]
Jan 27 19:33:12 compute-0 nova_compute[185480]: 2026-01-27 19:33:12.789 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:13 compute-0 nova_compute[185480]: 2026-01-27 19:33:13.469 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:14 compute-0 sshd-session[251969]: Invalid user ubuntu from 156.227.233.86 port 34730
Jan 27 19:33:14 compute-0 sshd-session[251969]: Connection closed by invalid user ubuntu 156.227.233.86 port 34730 [preauth]
Jan 27 19:33:14 compute-0 nova_compute[185480]: 2026-01-27 19:33:14.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:33:14 compute-0 nova_compute[185480]: 2026-01-27 19:33:14.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:33:15 compute-0 sshd-session[251971]: Invalid user ubuntu from 156.227.233.86 port 41484
Jan 27 19:33:15 compute-0 sshd-session[251971]: Connection closed by invalid user ubuntu 156.227.233.86 port 41484 [preauth]
Jan 27 19:33:16 compute-0 sshd-session[251973]: Invalid user ubuntu from 156.227.233.86 port 58012
Jan 27 19:33:16 compute-0 sshd-session[251973]: Connection closed by invalid user ubuntu 156.227.233.86 port 58012 [preauth]
Jan 27 19:33:17 compute-0 podman[251977]: 2026-01-27 19:33:17.355953178 +0000 UTC m=+0.115429266 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 19:33:17 compute-0 sshd-session[251975]: Invalid user ubuntu from 156.227.233.86 port 45276
Jan 27 19:33:17 compute-0 podman[251978]: 2026-01-27 19:33:17.375988984 +0000 UTC m=+0.128101710 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 19:33:17 compute-0 podman[251979]: 2026-01-27 19:33:17.388446272 +0000 UTC m=+0.135424951 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_id=kepler, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, release-0.7.12=, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, version=9.4, architecture=x86_64)
Jan 27 19:33:17 compute-0 sshd-session[251975]: Connection closed by invalid user ubuntu 156.227.233.86 port 45276 [preauth]
Jan 27 19:33:17 compute-0 nova_compute[185480]: 2026-01-27 19:33:17.793 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:18 compute-0 sshd-session[252037]: Invalid user ubuntu from 156.227.233.86 port 33004
Jan 27 19:33:18 compute-0 nova_compute[185480]: 2026-01-27 19:33:18.473 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:18 compute-0 sshd-session[252037]: Connection closed by invalid user ubuntu 156.227.233.86 port 33004 [preauth]
Jan 27 19:33:19 compute-0 sshd-session[252039]: Invalid user ubuntu from 156.227.233.86 port 50650
Jan 27 19:33:19 compute-0 sshd-session[252039]: Connection closed by invalid user ubuntu 156.227.233.86 port 50650 [preauth]
Jan 27 19:33:20 compute-0 sshd-session[252042]: Invalid user ubuntu from 156.227.233.86 port 38360
Jan 27 19:33:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:33:20.546 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:33:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:33:20.547 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:33:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:33:20.548 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:33:20 compute-0 sshd-session[252042]: Connection closed by invalid user ubuntu 156.227.233.86 port 38360 [preauth]
Jan 27 19:33:21 compute-0 sshd-session[252044]: Invalid user ubuntu from 156.227.233.86 port 53720
Jan 27 19:33:21 compute-0 sshd-session[252044]: Connection closed by invalid user ubuntu 156.227.233.86 port 53720 [preauth]
Jan 27 19:33:22 compute-0 sshd-session[252046]: Invalid user ubuntu from 156.227.233.86 port 39698
Jan 27 19:33:22 compute-0 sshd-session[252046]: Connection closed by invalid user ubuntu 156.227.233.86 port 39698 [preauth]
Jan 27 19:33:22 compute-0 nova_compute[185480]: 2026-01-27 19:33:22.798 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:23 compute-0 nova_compute[185480]: 2026-01-27 19:33:23.477 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:23 compute-0 sshd-session[252048]: Invalid user ubuntu from 156.227.233.86 port 54520
Jan 27 19:33:23 compute-0 sshd-session[252048]: Connection closed by invalid user ubuntu 156.227.233.86 port 54520 [preauth]
Jan 27 19:33:24 compute-0 sshd-session[252050]: Invalid user ubuntu from 156.227.233.86 port 42178
Jan 27 19:33:24 compute-0 sshd-session[252050]: Connection closed by invalid user ubuntu 156.227.233.86 port 42178 [preauth]
Jan 27 19:33:26 compute-0 sshd-session[252052]: Invalid user ubuntu from 156.227.233.86 port 56058
Jan 27 19:33:26 compute-0 sshd-session[252052]: Connection closed by invalid user ubuntu 156.227.233.86 port 56058 [preauth]
Jan 27 19:33:27 compute-0 sshd-session[252054]: Invalid user ubuntu from 156.227.233.86 port 49014
Jan 27 19:33:27 compute-0 sshd-session[252054]: Connection closed by invalid user ubuntu 156.227.233.86 port 49014 [preauth]
Jan 27 19:33:27 compute-0 podman[252056]: 2026-01-27 19:33:27.310298472 +0000 UTC m=+0.143943612 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 19:33:27 compute-0 nova_compute[185480]: 2026-01-27 19:33:27.802 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:28 compute-0 sshd-session[252078]: Invalid user ubuntu from 156.227.233.86 port 33436
Jan 27 19:33:28 compute-0 sshd-session[252078]: Connection closed by invalid user ubuntu 156.227.233.86 port 33436 [preauth]
Jan 27 19:33:28 compute-0 nova_compute[185480]: 2026-01-27 19:33:28.480 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:29 compute-0 sshd-session[252080]: Invalid user ubuntu from 156.227.233.86 port 46394
Jan 27 19:33:29 compute-0 sshd-session[252080]: Connection closed by invalid user ubuntu 156.227.233.86 port 46394 [preauth]
Jan 27 19:33:29 compute-0 podman[201378]: time="2026-01-27T19:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:33:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 27 19:33:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4840 "" "Go-http-client/1.1"
Jan 27 19:33:30 compute-0 sshd-session[252082]: Invalid user ubuntu from 156.227.233.86 port 58600
Jan 27 19:33:30 compute-0 sshd-session[252082]: Connection closed by invalid user ubuntu 156.227.233.86 port 58600 [preauth]
Jan 27 19:33:30 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 19:33:31 compute-0 sshd-session[252084]: Invalid user ubuntu from 156.227.233.86 port 46084
Jan 27 19:33:31 compute-0 openstack_network_exporter[204477]: ERROR   19:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:33:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:33:31 compute-0 openstack_network_exporter[204477]: ERROR   19:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:33:31 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:33:31 compute-0 sshd-session[252084]: Connection closed by invalid user ubuntu 156.227.233.86 port 46084 [preauth]
Jan 27 19:33:32 compute-0 sshd-session[252087]: Invalid user ubuntu from 156.227.233.86 port 33566
Jan 27 19:33:32 compute-0 podman[252089]: 2026-01-27 19:33:32.370297893 +0000 UTC m=+0.137153523 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 27 19:33:32 compute-0 sshd-session[252087]: Connection closed by invalid user ubuntu 156.227.233.86 port 33566 [preauth]
Jan 27 19:33:32 compute-0 nova_compute[185480]: 2026-01-27 19:33:32.806 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:33 compute-0 sshd-session[252111]: Invalid user ubuntu from 156.227.233.86 port 50028
Jan 27 19:33:33 compute-0 nova_compute[185480]: 2026-01-27 19:33:33.483 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:33 compute-0 sshd-session[252111]: Connection closed by invalid user ubuntu 156.227.233.86 port 50028 [preauth]
Jan 27 19:33:34 compute-0 sshd-session[252113]: Invalid user ubuntu from 156.227.233.86 port 36060
Jan 27 19:33:34 compute-0 sshd-session[252113]: Connection closed by invalid user ubuntu 156.227.233.86 port 36060 [preauth]
Jan 27 19:33:35 compute-0 podman[252117]: 2026-01-27 19:33:35.364134708 +0000 UTC m=+0.111920819 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:33:35 compute-0 podman[252119]: 2026-01-27 19:33:35.37433263 +0000 UTC m=+0.105010838 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 19:33:35 compute-0 sshd-session[252115]: Invalid user ubuntu from 156.227.233.86 port 50190
Jan 27 19:33:35 compute-0 podman[252118]: 2026-01-27 19:33:35.414215537 +0000 UTC m=+0.155995929 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 19:33:35 compute-0 sshd-session[252115]: Connection closed by invalid user ubuntu 156.227.233.86 port 50190 [preauth]
Jan 27 19:33:36 compute-0 sshd-session[252181]: Invalid user ubuntu from 156.227.233.86 port 36854
Jan 27 19:33:36 compute-0 sshd-session[252181]: Connection closed by invalid user ubuntu 156.227.233.86 port 36854 [preauth]
Jan 27 19:33:37 compute-0 sshd-session[252183]: Invalid user ubuntu from 156.227.233.86 port 50058
Jan 27 19:33:37 compute-0 sshd-session[252183]: Connection closed by invalid user ubuntu 156.227.233.86 port 50058 [preauth]
Jan 27 19:33:37 compute-0 nova_compute[185480]: 2026-01-27 19:33:37.808 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:38 compute-0 sshd-session[252185]: Invalid user ubuntu from 156.227.233.86 port 36126
Jan 27 19:33:38 compute-0 nova_compute[185480]: 2026-01-27 19:33:38.486 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:38 compute-0 sshd-session[252185]: Connection closed by invalid user ubuntu 156.227.233.86 port 36126 [preauth]
Jan 27 19:33:39 compute-0 sshd-session[252187]: Invalid user ubuntu from 156.227.233.86 port 50862
Jan 27 19:33:39 compute-0 sshd-session[252187]: Connection closed by invalid user ubuntu 156.227.233.86 port 50862 [preauth]
Jan 27 19:33:40 compute-0 sshd-session[252189]: Invalid user ubuntu from 156.227.233.86 port 37356
Jan 27 19:33:40 compute-0 sshd-session[252189]: Connection closed by invalid user ubuntu 156.227.233.86 port 37356 [preauth]
Jan 27 19:33:41 compute-0 sshd-session[252191]: Invalid user ubuntu from 156.227.233.86 port 50834
Jan 27 19:33:41 compute-0 sshd-session[252191]: Connection closed by invalid user ubuntu 156.227.233.86 port 50834 [preauth]
Jan 27 19:33:42 compute-0 sshd-session[252193]: Invalid user ubuntu from 156.227.233.86 port 37460
Jan 27 19:33:42 compute-0 sshd-session[252193]: Connection closed by invalid user ubuntu 156.227.233.86 port 37460 [preauth]
Jan 27 19:33:42 compute-0 nova_compute[185480]: 2026-01-27 19:33:42.814 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:43 compute-0 nova_compute[185480]: 2026-01-27 19:33:43.491 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:43 compute-0 sshd-session[252195]: Invalid user ubuntu from 156.227.233.86 port 53750
Jan 27 19:33:43 compute-0 sshd-session[252195]: Connection closed by invalid user ubuntu 156.227.233.86 port 53750 [preauth]
Jan 27 19:33:44 compute-0 sshd-session[252197]: Invalid user ubuntu from 156.227.233.86 port 42044
Jan 27 19:33:44 compute-0 sshd-session[252197]: Connection closed by invalid user ubuntu 156.227.233.86 port 42044 [preauth]
Jan 27 19:33:45 compute-0 sshd-session[252199]: Invalid user ubuntu from 156.227.233.86 port 53722
Jan 27 19:33:45 compute-0 sshd-session[252199]: Connection closed by invalid user ubuntu 156.227.233.86 port 53722 [preauth]
Jan 27 19:33:46 compute-0 sshd-session[252201]: Invalid user ubuntu from 156.227.233.86 port 38140
Jan 27 19:33:46 compute-0 sshd-session[252201]: Connection closed by invalid user ubuntu 156.227.233.86 port 38140 [preauth]
Jan 27 19:33:47 compute-0 sshd-session[252203]: Invalid user ubuntu from 156.227.233.86 port 51838
Jan 27 19:33:47 compute-0 nova_compute[185480]: 2026-01-27 19:33:47.818 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:47 compute-0 podman[252205]: 2026-01-27 19:33:47.842997351 +0000 UTC m=+0.136592820 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:33:47 compute-0 podman[252206]: 2026-01-27 19:33:47.845913573 +0000 UTC m=+0.120811360 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 19:33:47 compute-0 sshd-session[252203]: Connection closed by invalid user ubuntu 156.227.233.86 port 51838 [preauth]
Jan 27 19:33:47 compute-0 podman[252210]: 2026-01-27 19:33:47.869183118 +0000 UTC m=+0.130956100 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, release-0.7.12=, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 19:33:48 compute-0 nova_compute[185480]: 2026-01-27 19:33:48.494 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:48 compute-0 sshd-session[252268]: Invalid user ubuntu from 156.227.233.86 port 38108
Jan 27 19:33:48 compute-0 sshd-session[252268]: Connection closed by invalid user ubuntu 156.227.233.86 port 38108 [preauth]
Jan 27 19:33:49 compute-0 sshd-session[252270]: Invalid user debian from 156.227.233.86 port 55404
Jan 27 19:33:49 compute-0 sshd-session[252270]: Connection closed by invalid user debian 156.227.233.86 port 55404 [preauth]
Jan 27 19:33:50 compute-0 sshd-session[252273]: Invalid user debian from 156.227.233.86 port 41520
Jan 27 19:33:50 compute-0 sshd-session[252273]: Connection closed by invalid user debian 156.227.233.86 port 41520 [preauth]
Jan 27 19:33:51 compute-0 sshd-session[252275]: Invalid user debian from 156.227.233.86 port 56014
Jan 27 19:33:52 compute-0 sshd-session[252275]: Connection closed by invalid user debian 156.227.233.86 port 56014 [preauth]
Jan 27 19:33:52 compute-0 nova_compute[185480]: 2026-01-27 19:33:52.821 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:52 compute-0 sshd-session[252277]: Invalid user debian from 156.227.233.86 port 42262
Jan 27 19:33:53 compute-0 sshd-session[252277]: Connection closed by invalid user debian 156.227.233.86 port 42262 [preauth]
Jan 27 19:33:53 compute-0 nova_compute[185480]: 2026-01-27 19:33:53.498 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:53 compute-0 sshd-session[252279]: Invalid user debian from 156.227.233.86 port 57434
Jan 27 19:33:54 compute-0 sshd-session[252279]: Connection closed by invalid user debian 156.227.233.86 port 57434 [preauth]
Jan 27 19:33:54 compute-0 sshd-session[252281]: Invalid user debian from 156.227.233.86 port 42072
Jan 27 19:33:55 compute-0 sshd-session[252281]: Connection closed by invalid user debian 156.227.233.86 port 42072 [preauth]
Jan 27 19:33:55 compute-0 sshd-session[252283]: Invalid user debian from 156.227.233.86 port 55178
Jan 27 19:33:56 compute-0 sshd-session[252283]: Connection closed by invalid user debian 156.227.233.86 port 55178 [preauth]
Jan 27 19:33:56 compute-0 sshd-session[252285]: Invalid user debian from 156.227.233.86 port 40568
Jan 27 19:33:57 compute-0 sshd-session[252285]: Connection closed by invalid user debian 156.227.233.86 port 40568 [preauth]
Jan 27 19:33:57 compute-0 nova_compute[185480]: 2026-01-27 19:33:57.827 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:57 compute-0 sshd-session[252287]: Invalid user debian from 156.227.233.86 port 51596
Jan 27 19:33:58 compute-0 podman[252289]: 2026-01-27 19:33:58.09502515 +0000 UTC m=+0.129368942 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, name=ubi9-minimal, distribution-scope=public, version=9.6, io.openshift.tags=minimal rhel9)
Jan 27 19:33:58 compute-0 sshd-session[252287]: Connection closed by invalid user debian 156.227.233.86 port 51596 [preauth]
Jan 27 19:33:58 compute-0 nova_compute[185480]: 2026-01-27 19:33:58.499 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:33:58 compute-0 nova_compute[185480]: 2026-01-27 19:33:58.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:33:58 compute-0 nova_compute[185480]: 2026-01-27 19:33:58.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:33:58 compute-0 nova_compute[185480]: 2026-01-27 19:33:58.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 19:33:58 compute-0 sshd-session[252309]: Invalid user debian from 156.227.233.86 port 37190
Jan 27 19:33:59 compute-0 sshd-session[252309]: Connection closed by invalid user debian 156.227.233.86 port 37190 [preauth]
Jan 27 19:33:59 compute-0 nova_compute[185480]: 2026-01-27 19:33:59.509 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:33:59 compute-0 nova_compute[185480]: 2026-01-27 19:33:59.510 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:33:59 compute-0 nova_compute[185480]: 2026-01-27 19:33:59.510 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:33:59 compute-0 nova_compute[185480]: 2026-01-27 19:33:59.511 185484 DEBUG nova.objects.instance [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lazy-loading 'info_cache' on Instance uuid f79ddcc5-ee21-43e8-9d0d-60476a477361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 19:33:59 compute-0 podman[201378]: time="2026-01-27T19:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:33:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 27 19:33:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4842 "" "Go-http-client/1.1"
Jan 27 19:33:59 compute-0 sshd-session[252311]: Invalid user debian from 156.227.233.86 port 51542
Jan 27 19:34:00 compute-0 sshd-session[252311]: Connection closed by invalid user debian 156.227.233.86 port 51542 [preauth]
Jan 27 19:34:00 compute-0 sshd-session[252313]: Invalid user debian from 156.227.233.86 port 40018
Jan 27 19:34:01 compute-0 sshd-session[252313]: Connection closed by invalid user debian 156.227.233.86 port 40018 [preauth]
Jan 27 19:34:01 compute-0 openstack_network_exporter[204477]: ERROR   19:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:34:01 compute-0 openstack_network_exporter[204477]: ERROR   19:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:34:02 compute-0 sshd-session[252315]: Invalid user debian from 156.227.233.86 port 55010
Jan 27 19:34:02 compute-0 sshd-session[252315]: Connection closed by invalid user debian 156.227.233.86 port 55010 [preauth]
Jan 27 19:34:02 compute-0 nova_compute[185480]: 2026-01-27 19:34:02.550 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Updating instance_info_cache with network_info: [{"id": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "address": "fa:16:3e:de:a4:15", "network": {"id": "47175578-eb32-4720-93c5-05fa0d34701f", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2004790355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50b0e23834964280a34973a87d80d1b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2f2dfe-3f", "ovs_interfaceid": "eb2f2dfe-3f62-4bba-8586-84ee449f5ae5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:34:02 compute-0 nova_compute[185480]: 2026-01-27 19:34:02.577 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-f79ddcc5-ee21-43e8-9d0d-60476a477361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:34:02 compute-0 nova_compute[185480]: 2026-01-27 19:34:02.577 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: f79ddcc5-ee21-43e8-9d0d-60476a477361] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:34:02 compute-0 nova_compute[185480]: 2026-01-27 19:34:02.832 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:03 compute-0 sshd-session[252317]: Invalid user debian from 156.227.233.86 port 40956
Jan 27 19:34:03 compute-0 podman[252319]: 2026-01-27 19:34:03.258212603 +0000 UTC m=+0.145919360 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 27 19:34:03 compute-0 sshd-session[252317]: Connection closed by invalid user debian 156.227.233.86 port 40956 [preauth]
Jan 27 19:34:03 compute-0 nova_compute[185480]: 2026-01-27 19:34:03.502 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:03 compute-0 nova_compute[185480]: 2026-01-27 19:34:03.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:34:04 compute-0 sshd-session[252338]: Invalid user debian from 156.227.233.86 port 55632
Jan 27 19:34:04 compute-0 sshd-session[252338]: Connection closed by invalid user debian 156.227.233.86 port 55632 [preauth]
Jan 27 19:34:05 compute-0 sshd-session[252340]: Invalid user debian from 156.227.233.86 port 42274
Jan 27 19:34:05 compute-0 sshd-session[252340]: Connection closed by invalid user debian 156.227.233.86 port 42274 [preauth]
Jan 27 19:34:06 compute-0 sshd-session[252342]: Invalid user debian from 156.227.233.86 port 58180
Jan 27 19:34:06 compute-0 podman[252344]: 2026-01-27 19:34:06.352905285 +0000 UTC m=+0.111093830 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 19:34:06 compute-0 podman[252346]: 2026-01-27 19:34:06.364315377 +0000 UTC m=+0.103982503 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 19:34:06 compute-0 sshd-session[252342]: Connection closed by invalid user debian 156.227.233.86 port 58180 [preauth]
Jan 27 19:34:06 compute-0 podman[252345]: 2026-01-27 19:34:06.449117584 +0000 UTC m=+0.199261089 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 19:34:07 compute-0 sshd-session[252408]: Invalid user debian from 156.227.233.86 port 44948
Jan 27 19:34:07 compute-0 sshd-session[252408]: Connection closed by invalid user debian 156.227.233.86 port 44948 [preauth]
Jan 27 19:34:07 compute-0 nova_compute[185480]: 2026-01-27 19:34:07.836 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:08 compute-0 sshd-session[252410]: Invalid user debian from 156.227.233.86 port 57336
Jan 27 19:34:08 compute-0 sshd-session[252410]: Connection closed by invalid user debian 156.227.233.86 port 57336 [preauth]
Jan 27 19:34:08 compute-0 nova_compute[185480]: 2026-01-27 19:34:08.506 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:09 compute-0 sshd-session[252412]: Invalid user debian from 156.227.233.86 port 43872
Jan 27 19:34:09 compute-0 sshd-session[252412]: Connection closed by invalid user debian 156.227.233.86 port 43872 [preauth]
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.567 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.568 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.569 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.570 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.687 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.794 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.796 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.862 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f79ddcc5-ee21-43e8-9d0d-60476a477361/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.877 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.978 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:34:09 compute-0 nova_compute[185480]: 2026-01-27 19:34:09.980 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.077 185484 DEBUG oslo_concurrency.processutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 19:34:10 compute-0 sshd-session[252414]: Invalid user debian from 156.227.233.86 port 59342
Jan 27 19:34:10 compute-0 sshd-session[252414]: Connection closed by invalid user debian 156.227.233.86 port 59342 [preauth]
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.731 185484 WARNING nova.virt.libvirt.driver [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.733 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4883MB free_disk=72.3187141418457GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.733 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.734 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.861 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance f79ddcc5-ee21-43e8-9d0d-60476a477361 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.862 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Instance 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.863 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.863 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.946 185484 DEBUG nova.compute.provider_tree [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed in ProviderTree for provider: 8877e97b-aaf6-4210-a385-0f49c1a02906 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.962 185484 DEBUG nova.scheduler.client.report [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Inventory has not changed for provider 8877e97b-aaf6-4210-a385-0f49c1a02906 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.965 185484 DEBUG nova.compute.resource_tracker [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 19:34:10 compute-0 nova_compute[185480]: 2026-01-27 19:34:10.965 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:34:11 compute-0 sshd-session[252428]: Invalid user debian from 156.227.233.86 port 46678
Jan 27 19:34:11 compute-0 sshd-session[252428]: Connection closed by invalid user debian 156.227.233.86 port 46678 [preauth]
Jan 27 19:34:11 compute-0 nova_compute[185480]: 2026-01-27 19:34:11.966 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:34:11 compute-0 nova_compute[185480]: 2026-01-27 19:34:11.968 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 19:34:12 compute-0 sshd-session[252430]: Invalid user debian from 156.227.233.86 port 35730
Jan 27 19:34:12 compute-0 sshd-session[252430]: Connection closed by invalid user debian 156.227.233.86 port 35730 [preauth]
Jan 27 19:34:12 compute-0 nova_compute[185480]: 2026-01-27 19:34:12.840 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:13 compute-0 sshd-session[252432]: Invalid user debian from 156.227.233.86 port 54482
Jan 27 19:34:13 compute-0 nova_compute[185480]: 2026-01-27 19:34:13.509 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:13 compute-0 nova_compute[185480]: 2026-01-27 19:34:13.516 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:34:13 compute-0 nova_compute[185480]: 2026-01-27 19:34:13.517 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:34:13 compute-0 sshd-session[252432]: Connection closed by invalid user debian 156.227.233.86 port 54482 [preauth]
Jan 27 19:34:14 compute-0 sshd-session[252434]: Invalid user debian from 156.227.233.86 port 39366
Jan 27 19:34:14 compute-0 nova_compute[185480]: 2026-01-27 19:34:14.510 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:34:14 compute-0 sshd-session[252434]: Connection closed by invalid user debian 156.227.233.86 port 39366 [preauth]
Jan 27 19:34:15 compute-0 sshd-session[252436]: Invalid user debian from 156.227.233.86 port 51284
Jan 27 19:34:15 compute-0 nova_compute[185480]: 2026-01-27 19:34:15.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:34:15 compute-0 sshd-session[252436]: Connection closed by invalid user debian 156.227.233.86 port 51284 [preauth]
Jan 27 19:34:16 compute-0 sshd-session[252438]: Invalid user debian from 156.227.233.86 port 34632
Jan 27 19:34:16 compute-0 nova_compute[185480]: 2026-01-27 19:34:16.514 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:34:16 compute-0 sshd-session[252438]: Connection closed by invalid user debian 156.227.233.86 port 34632 [preauth]
Jan 27 19:34:17 compute-0 nova_compute[185480]: 2026-01-27 19:34:17.845 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:18 compute-0 podman[252442]: 2026-01-27 19:34:18.309110501 +0000 UTC m=+0.075167790 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 19:34:18 compute-0 podman[252443]: 2026-01-27 19:34:18.321376255 +0000 UTC m=+0.081764384 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 27 19:34:18 compute-0 podman[252444]: 2026-01-27 19:34:18.353259773 +0000 UTC m=+0.098863006 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, container_name=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.29.0, name=ubi9)
Jan 27 19:34:18 compute-0 nova_compute[185480]: 2026-01-27 19:34:18.512 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:18 compute-0 sshd-session[252440]: Invalid user debian from 156.227.233.86 port 51144
Jan 27 19:34:18 compute-0 sshd-session[252440]: Connection closed by invalid user debian 156.227.233.86 port 51144 [preauth]
Jan 27 19:34:19 compute-0 sshd-session[252503]: Invalid user debian from 156.227.233.86 port 59608
Jan 27 19:34:19 compute-0 sshd-session[252503]: Connection closed by invalid user debian 156.227.233.86 port 59608 [preauth]
Jan 27 19:34:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:34:20.547 106898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 19:34:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:34:20.548 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 19:34:20 compute-0 ovn_metadata_agent[106893]: 2026-01-27 19:34:20.549 106898 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 19:34:20 compute-0 sshd-session[252506]: Invalid user debian from 156.227.233.86 port 46292
Jan 27 19:34:20 compute-0 sshd-session[252506]: Connection closed by invalid user debian 156.227.233.86 port 46292 [preauth]
Jan 27 19:34:21 compute-0 sshd-session[252508]: Invalid user debian from 156.227.233.86 port 60744
Jan 27 19:34:21 compute-0 sshd-session[252508]: Connection closed by invalid user debian 156.227.233.86 port 60744 [preauth]
Jan 27 19:34:22 compute-0 sshd-session[252510]: Invalid user debian from 156.227.233.86 port 45820
Jan 27 19:34:22 compute-0 nova_compute[185480]: 2026-01-27 19:34:22.849 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:22 compute-0 sshd-session[252510]: Connection closed by invalid user debian 156.227.233.86 port 45820 [preauth]
Jan 27 19:34:23 compute-0 nova_compute[185480]: 2026-01-27 19:34:23.514 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:23 compute-0 sshd-session[252512]: Invalid user debian from 156.227.233.86 port 33382
Jan 27 19:34:23 compute-0 sshd-session[252512]: Connection closed by invalid user debian 156.227.233.86 port 33382 [preauth]
Jan 27 19:34:26 compute-0 sshd-session[252514]: Invalid user debian from 156.227.233.86 port 49910
Jan 27 19:34:26 compute-0 sshd-session[252514]: Connection closed by invalid user debian 156.227.233.86 port 49910 [preauth]
Jan 27 19:34:27 compute-0 sshd-session[252516]: Invalid user debian from 156.227.233.86 port 35352
Jan 27 19:34:27 compute-0 sshd-session[252516]: Connection closed by invalid user debian 156.227.233.86 port 35352 [preauth]
Jan 27 19:34:27 compute-0 nova_compute[185480]: 2026-01-27 19:34:27.858 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:28 compute-0 podman[252520]: 2026-01-27 19:34:28.331262283 +0000 UTC m=+0.096841146 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, version=9.6, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Jan 27 19:34:28 compute-0 sshd-session[252518]: Invalid user debian from 156.227.233.86 port 53448
Jan 27 19:34:28 compute-0 nova_compute[185480]: 2026-01-27 19:34:28.517 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:28 compute-0 sshd-session[252518]: Connection closed by invalid user debian 156.227.233.86 port 53448 [preauth]
Jan 27 19:34:29 compute-0 sshd-session[252540]: Invalid user debian from 156.227.233.86 port 40988
Jan 27 19:34:29 compute-0 sshd-session[252540]: Connection closed by invalid user debian 156.227.233.86 port 40988 [preauth]
Jan 27 19:34:29 compute-0 podman[201378]: time="2026-01-27T19:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:34:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 27 19:34:29 compute-0 podman[201378]: @ - - [27/Jan/2026:19:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4841 "" "Go-http-client/1.1"
Jan 27 19:34:30 compute-0 sshd-session[252542]: Invalid user debian from 156.227.233.86 port 54846
Jan 27 19:34:30 compute-0 sshd-session[252542]: Connection closed by invalid user debian 156.227.233.86 port 54846 [preauth]
Jan 27 19:34:31 compute-0 sshd-session[252544]: Invalid user debian from 156.227.233.86 port 41988
Jan 27 19:34:31 compute-0 openstack_network_exporter[204477]: ERROR   19:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:34:31 compute-0 openstack_network_exporter[204477]: ERROR   19:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:34:31 compute-0 sshd-session[252544]: Connection closed by invalid user debian 156.227.233.86 port 41988 [preauth]
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.106 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.107 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f75d8c1a5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f75d8c19f10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f75db131370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.119 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f79ddcc5-ee21-43e8-9d0d-60476a477361', 'name': 'tempest-ServerActionsTestJSON-server-1826919436', 'flavor': {'id': '49f81b8c-e0df-4a53-87c6-69576be59651', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '729797c6-2677-44bd-a4a8-949d1f57b0a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '50b0e23834964280a34973a87d80d1b8', 'user_id': '79532101c66342a980a90799ac41a442', 'hostId': 'b1181980f97b1ec08e2bf869b7f7812beda402ece77f78093638d8d3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.123 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '14a5dad2-3e22-42a5-bd6e-7255c6b09d8f', 'name': 'tempest-TestServerBasicOps-server-252932369', 'flavor': {'id': '49f81b8c-e0df-4a53-87c6-69576be59651', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '729797c6-2677-44bd-a4a8-949d1f57b0a2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '2b8690906d754ad4b5878d33231c97f9', 'user_id': 'd57ebe90e53d40899a8b3f3ce873df18', 'hostId': '8d5ef883a6ce7ade6fef1c5befa43997991af0ec42f03bd558e08f8b', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.124 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.124 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.124 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1b020>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.124 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T19:34:32.124820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.147 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.148 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.172 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.172 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.173 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.173 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f75d8c18860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.173 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.173 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.173 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T19:34:32.173695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.203 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/memory.usage volume: 42.6796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.225 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/memory.usage volume: 42.625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.226 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.226 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f75d8c19070>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.226 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.226 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c190a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.227 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.227 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T19:34:32.227024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.227 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.227 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.228 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f75d8c188f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.228 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.228 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.228 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c188c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.228 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T19:34:32.228477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.234 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.239 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.240 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.240 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f75d8c18470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.240 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.241 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.241 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75daab5a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.241 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T19:34:32.241226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.241 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.298 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.read.requests volume: 1094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.299 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.357 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.read.requests volume: 1120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.358 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.359 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f75d8c19cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.359 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.360 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.360 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.360 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.360 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f75d8c19be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.360 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.360 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.361 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.361 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.361 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T19:34:32.360011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f75d8c1afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T19:34:32.361001) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.361 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.362 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.362 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.362 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.362 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.362 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f75d8c1a5a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.363 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.363 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8dd6bd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T19:34:32.362200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.363 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.363 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/cpu volume: 38680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.364 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/cpu volume: 36920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f75d8c182f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.365 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T19:34:32.363891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.365 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.365 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c183e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.365 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.365 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.read.bytes volume: 30530048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.365 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.365 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.read.bytes volume: 30800384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.366 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.366 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f75d8c18110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.366 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.367 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.367 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.367 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.367 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.367 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f75d8c18410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.368 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.368 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.368 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.368 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.read.latency volume: 1248334478 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.368 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.read.latency volume: 99676992 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.369 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.read.latency volume: 2362261547 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.369 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.read.latency volume: 236045790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.369 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f75d8c19e80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.369 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.370 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19c40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T19:34:32.365213) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.370 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.370 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T19:34:32.367107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T19:34:32.368282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.370 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.370 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f75d8c19c10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.371 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.371 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19ca0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.371 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.371 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.371 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f75d8c184a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.372 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.372 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T19:34:32.370364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.372 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c184d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.372 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.373 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.373 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.373 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T19:34:32.371519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.373 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.373 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T19:34:32.372917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.373 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f75d8c19ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.374 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.374 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.374 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.375 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f75d8c18530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.375 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.375 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.376 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T19:34:32.374819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.376 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.write.latency volume: 7183658627 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.376 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.376 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.write.latency volume: 22342975346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.377 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.378 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f75d8c19d60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.378 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T19:34:32.376107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.378 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c19d90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.379 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.379 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.379 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.380 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f75d8c18590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.380 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c185c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T19:34:32.379377) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T19:34:32.381001) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.381 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.381 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.write.requests volume: 327 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.381 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.381 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.write.requests volume: 324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.381 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f75d8c185f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.382 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.382 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f75d8c19df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T19:34:32.382866) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.383 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f75d8c18e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.384 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.384 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.384 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.384 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T19:34:32.384349) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.385 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f75d8c18650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.385 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c18680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.386 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f75d8c18500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.386 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75d8c1a690>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.387 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.387 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.write.bytes volume: 73089024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.387 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.387 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.write.bytes volume: 73068544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.387 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.388 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f75dad48950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.388 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.388 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.388 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f75da2d26f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T19:34:32.386140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.388 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.389 14 DEBUG ceilometer.compute.pollsters [-] f79ddcc5-ee21-43e8-9d0d-60476a477361/network.incoming.bytes volume: 4475 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.389 14 DEBUG ceilometer.compute.pollsters [-] 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.389 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f75d8c18920>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f75d8d83c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.389 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T19:34:32.387041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T19:34:32.388719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.395 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.396 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.396 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.396 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.396 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 ceilometer_agent_compute[195184]: 2026-01-27 19:34:32.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 19:34:32 compute-0 sshd-session[252546]: Invalid user debian from 156.227.233.86 port 57662
Jan 27 19:34:32 compute-0 sshd-session[252546]: Connection closed by invalid user debian 156.227.233.86 port 57662 [preauth]
Jan 27 19:34:32 compute-0 nova_compute[185480]: 2026-01-27 19:34:32.869 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:33 compute-0 sshd-session[252549]: Invalid user debian from 156.227.233.86 port 45888
Jan 27 19:34:33 compute-0 nova_compute[185480]: 2026-01-27 19:34:33.519 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:33 compute-0 podman[252551]: 2026-01-27 19:34:33.599976566 +0000 UTC m=+0.123451654 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ceilometer_agent_compute)
Jan 27 19:34:33 compute-0 sshd-session[252549]: Connection closed by invalid user debian 156.227.233.86 port 45888 [preauth]
Jan 27 19:34:34 compute-0 sshd-session[252571]: Invalid user debian from 156.227.233.86 port 57294
Jan 27 19:34:34 compute-0 sshd-session[252571]: Connection closed by invalid user debian 156.227.233.86 port 57294 [preauth]
Jan 27 19:34:35 compute-0 sshd-session[252573]: Invalid user debian from 156.227.233.86 port 41224
Jan 27 19:34:35 compute-0 sshd-session[252573]: Connection closed by invalid user debian 156.227.233.86 port 41224 [preauth]
Jan 27 19:34:36 compute-0 sshd-session[252575]: Invalid user debian from 156.227.233.86 port 55306
Jan 27 19:34:36 compute-0 podman[252579]: 2026-01-27 19:34:36.637357231 +0000 UTC m=+0.070956895 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 19:34:36 compute-0 podman[252577]: 2026-01-27 19:34:36.650542678 +0000 UTC m=+0.090634743 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 19:34:36 compute-0 sshd-session[252575]: Connection closed by invalid user debian 156.227.233.86 port 55306 [preauth]
Jan 27 19:34:36 compute-0 podman[252578]: 2026-01-27 19:34:36.72706006 +0000 UTC m=+0.158222265 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 27 19:34:37 compute-0 sshd-session[252643]: Invalid user debian from 156.227.233.86 port 43734
Jan 27 19:34:37 compute-0 sshd-session[252643]: Connection closed by invalid user debian 156.227.233.86 port 43734 [preauth]
Jan 27 19:34:37 compute-0 nova_compute[185480]: 2026-01-27 19:34:37.873 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:38 compute-0 nova_compute[185480]: 2026-01-27 19:34:38.525 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:38 compute-0 sshd-session[252645]: Invalid user debian from 156.227.233.86 port 57708
Jan 27 19:34:38 compute-0 sshd-session[252645]: Connection closed by invalid user debian 156.227.233.86 port 57708 [preauth]
Jan 27 19:34:39 compute-0 sshd-session[252647]: Invalid user debian from 156.227.233.86 port 44082
Jan 27 19:34:39 compute-0 sshd-session[252647]: Connection closed by invalid user debian 156.227.233.86 port 44082 [preauth]
Jan 27 19:34:40 compute-0 sshd-session[252649]: Invalid user debian from 156.227.233.86 port 60662
Jan 27 19:34:40 compute-0 sshd-session[252649]: Connection closed by invalid user debian 156.227.233.86 port 60662 [preauth]
Jan 27 19:34:41 compute-0 sshd-session[252651]: Invalid user debian from 156.227.233.86 port 46838
Jan 27 19:34:41 compute-0 sshd-session[252651]: Connection closed by invalid user debian 156.227.233.86 port 46838 [preauth]
Jan 27 19:34:42 compute-0 sshd-session[252653]: Invalid user debian from 156.227.233.86 port 35974
Jan 27 19:34:42 compute-0 sshd-session[252653]: Connection closed by invalid user debian 156.227.233.86 port 35974 [preauth]
Jan 27 19:34:42 compute-0 nova_compute[185480]: 2026-01-27 19:34:42.877 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:43 compute-0 nova_compute[185480]: 2026-01-27 19:34:43.528 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:43 compute-0 sshd-session[252655]: Invalid user debian from 156.227.233.86 port 52458
Jan 27 19:34:43 compute-0 sshd-session[252655]: Connection closed by invalid user debian 156.227.233.86 port 52458 [preauth]
Jan 27 19:34:43 compute-0 sshd-session[252657]: Invalid user sol from 45.148.10.240 port 44038
Jan 27 19:34:43 compute-0 sshd-session[252657]: Connection closed by invalid user sol 45.148.10.240 port 44038 [preauth]
Jan 27 19:34:44 compute-0 sshd-session[252659]: Invalid user debian from 156.227.233.86 port 40360
Jan 27 19:34:44 compute-0 sshd-session[252659]: Connection closed by invalid user debian 156.227.233.86 port 40360 [preauth]
Jan 27 19:34:45 compute-0 sshd-session[252661]: Invalid user debian from 156.227.233.86 port 54064
Jan 27 19:34:45 compute-0 sshd-session[252661]: Connection closed by invalid user debian 156.227.233.86 port 54064 [preauth]
Jan 27 19:34:46 compute-0 sshd-session[252663]: Invalid user debian from 156.227.233.86 port 40332
Jan 27 19:34:46 compute-0 sshd-session[252663]: Connection closed by invalid user debian 156.227.233.86 port 40332 [preauth]
Jan 27 19:34:47 compute-0 nova_compute[185480]: 2026-01-27 19:34:47.881 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:48 compute-0 sshd-session[252665]: Invalid user debian from 156.227.233.86 port 55398
Jan 27 19:34:48 compute-0 sshd-session[252665]: Connection closed by invalid user debian 156.227.233.86 port 55398 [preauth]
Jan 27 19:34:48 compute-0 nova_compute[185480]: 2026-01-27 19:34:48.531 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:49 compute-0 sshd-session[252667]: Invalid user debian from 156.227.233.86 port 46848
Jan 27 19:34:49 compute-0 podman[252669]: 2026-01-27 19:34:49.271020407 +0000 UTC m=+0.111301053 container health_status 87adcdf3e8cc2fd658289925f128d2de87edd3263248accfad2af07ae04edff3 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 19:34:49 compute-0 sshd-session[252667]: Connection closed by invalid user debian 156.227.233.86 port 46848 [preauth]
Jan 27 19:34:49 compute-0 podman[252671]: 2026-01-27 19:34:49.305736856 +0000 UTC m=+0.130517599 container health_status b518a81ac2094944b422c365331c41645f8c449c513cf26b456377a7509f5d8f (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler)
Jan 27 19:34:49 compute-0 podman[252670]: 2026-01-27 19:34:49.309041917 +0000 UTC m=+0.144981106 container health_status a7bdf1a5968d03f9ef241da1209a33dad19b743896e4ecfabc5ee4a8c535446d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 27 19:34:50 compute-0 sshd-session[252727]: Invalid user debian from 156.227.233.86 port 58700
Jan 27 19:34:50 compute-0 sshd-session[252727]: Connection closed by invalid user debian 156.227.233.86 port 58700 [preauth]
Jan 27 19:34:51 compute-0 sshd-session[252730]: Invalid user debian from 156.227.233.86 port 47456
Jan 27 19:34:51 compute-0 sshd-session[252730]: Connection closed by invalid user debian 156.227.233.86 port 47456 [preauth]
Jan 27 19:34:51 compute-0 sshd-session[252734]: Accepted publickey for zuul from 192.168.122.10 port 37134 ssh2: ECDSA SHA256:FvlJ6BSQ602412P7FBGSwdCRWF1lLNoEa3f3v2hblls
Jan 27 19:34:51 compute-0 systemd-logind[795]: New session 31 of user zuul.
Jan 27 19:34:51 compute-0 systemd[1]: Started Session 31 of User zuul.
Jan 27 19:34:52 compute-0 sshd-session[252734]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 19:34:52 compute-0 sudo[252738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 27 19:34:52 compute-0 sudo[252738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 19:34:52 compute-0 sshd-session[252732]: Invalid user debian from 156.227.233.86 port 33810
Jan 27 19:34:52 compute-0 sshd-session[252732]: Connection closed by invalid user debian 156.227.233.86 port 33810 [preauth]
Jan 27 19:34:52 compute-0 nova_compute[185480]: 2026-01-27 19:34:52.893 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:53 compute-0 sshd-session[252772]: Invalid user debian from 156.227.233.86 port 47696
Jan 27 19:34:53 compute-0 sshd-session[252772]: Connection closed by invalid user debian 156.227.233.86 port 47696 [preauth]
Jan 27 19:34:53 compute-0 nova_compute[185480]: 2026-01-27 19:34:53.536 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:54 compute-0 sshd-session[252800]: Invalid user debian from 156.227.233.86 port 34882
Jan 27 19:34:54 compute-0 sshd-session[252800]: Connection closed by invalid user debian 156.227.233.86 port 34882 [preauth]
Jan 27 19:34:55 compute-0 sshd-session[252851]: Invalid user debian from 156.227.233.86 port 50242
Jan 27 19:34:55 compute-0 sshd-session[252851]: Connection closed by invalid user debian 156.227.233.86 port 50242 [preauth]
Jan 27 19:34:56 compute-0 sshd-session[252886]: Invalid user debian from 156.227.233.86 port 40404
Jan 27 19:34:56 compute-0 sshd-session[252886]: Connection closed by invalid user debian 156.227.233.86 port 40404 [preauth]
Jan 27 19:34:57 compute-0 sshd-session[252891]: Invalid user debian from 156.227.233.86 port 60866
Jan 27 19:34:57 compute-0 sshd-session[252891]: Connection closed by invalid user debian 156.227.233.86 port 60866 [preauth]
Jan 27 19:34:57 compute-0 ovs-vsctl[252927]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 27 19:34:57 compute-0 nova_compute[185480]: 2026-01-27 19:34:57.897 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:58 compute-0 sshd-session[252923]: Invalid user debian from 156.227.233.86 port 48854
Jan 27 19:34:58 compute-0 nova_compute[185480]: 2026-01-27 19:34:58.540 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:34:58 compute-0 podman[252964]: 2026-01-27 19:34:58.561096932 +0000 UTC m=+0.123522506 container health_status 2ae51d4657265378391efedaaebacaae22f5e298bd01c0ca795b004999f831f7 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Jan 27 19:34:58 compute-0 sshd-session[252923]: Connection closed by invalid user debian 156.227.233.86 port 48854 [preauth]
Jan 27 19:34:58 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 252762 (sos)
Jan 27 19:34:58 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 27 19:34:58 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 27 19:34:59 compute-0 virtqemud[185201]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 27 19:34:59 compute-0 virtqemud[185201]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 27 19:34:59 compute-0 sshd-session[252995]: Invalid user debian from 156.227.233.86 port 33660
Jan 27 19:34:59 compute-0 virtqemud[185201]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 27 19:34:59 compute-0 sshd-session[252995]: Connection closed by invalid user debian 156.227.233.86 port 33660 [preauth]
Jan 27 19:34:59 compute-0 podman[201378]: time="2026-01-27T19:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 19:34:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 27 19:34:59 compute-0 podman[201378]: @ - - [27/Jan/2026:19:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4850 "" "Go-http-client/1.1"
Jan 27 19:35:00 compute-0 sshd-session[253115]: Invalid user debian from 156.227.233.86 port 49182
Jan 27 19:35:00 compute-0 nova_compute[185480]: 2026-01-27 19:35:00.515 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:35:00 compute-0 nova_compute[185480]: 2026-01-27 19:35:00.516 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 19:35:00 compute-0 sshd-session[253115]: Connection closed by invalid user debian 156.227.233.86 port 49182 [preauth]
Jan 27 19:35:01 compute-0 crontab[253370]: (root) LIST (root)
Jan 27 19:35:01 compute-0 sshd-session[253314]: Invalid user debian from 156.227.233.86 port 35856
Jan 27 19:35:01 compute-0 nova_compute[185480]: 2026-01-27 19:35:01.393 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquiring lock "refresh_cache-14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 19:35:01 compute-0 nova_compute[185480]: 2026-01-27 19:35:01.394 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Acquired lock "refresh_cache-14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 19:35:01 compute-0 nova_compute[185480]: 2026-01-27 19:35:01.394 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 19:35:01 compute-0 openstack_network_exporter[204477]: ERROR   19:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 19:35:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:35:01 compute-0 openstack_network_exporter[204477]: ERROR   19:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 19:35:01 compute-0 openstack_network_exporter[204477]: 
Jan 27 19:35:01 compute-0 sshd-session[253314]: Connection closed by invalid user debian 156.227.233.86 port 35856 [preauth]
Jan 27 19:35:02 compute-0 sshd-session[253427]: Invalid user debian from 156.227.233.86 port 48168
Jan 27 19:35:02 compute-0 sshd-session[253427]: Connection closed by invalid user debian 156.227.233.86 port 48168 [preauth]
Jan 27 19:35:02 compute-0 nova_compute[185480]: 2026-01-27 19:35:02.903 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:35:03 compute-0 sshd-session[253467]: Invalid user debian from 156.227.233.86 port 34920
Jan 27 19:35:03 compute-0 nova_compute[185480]: 2026-01-27 19:35:03.542 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:35:03 compute-0 sshd-session[253467]: Connection closed by invalid user debian 156.227.233.86 port 34920 [preauth]
Jan 27 19:35:03 compute-0 systemd[1]: Starting Hostname Service...
Jan 27 19:35:04 compute-0 podman[253500]: 2026-01-27 19:35:04.006923237 +0000 UTC m=+0.066670960 container health_status a73297cb21bbf1efec9f52f56ba9c01ce113a73e160e41d2cf32e42cc00db5c4 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 19:35:04 compute-0 systemd[1]: Started Hostname Service.
Jan 27 19:35:04 compute-0 sshd-session[253476]: Invalid user debian from 156.227.233.86 port 52034
Jan 27 19:35:04 compute-0 sshd-session[253476]: Connection closed by invalid user debian 156.227.233.86 port 52034 [preauth]
Jan 27 19:35:05 compute-0 sshd-session[253557]: Invalid user debian from 156.227.233.86 port 40510
Jan 27 19:35:05 compute-0 sshd-session[253557]: Connection closed by invalid user debian 156.227.233.86 port 40510 [preauth]
Jan 27 19:35:06 compute-0 nova_compute[185480]: 2026-01-27 19:35:06.351 185484 DEBUG nova.network.neutron [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Updating instance_info_cache with network_info: [{"id": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "address": "fa:16:3e:0c:85:f5", "network": {"id": "4edcfe4e-277a-432a-a139-0a06cca1f6d2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-587329447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b8690906d754ad4b5878d33231c97f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2976aaab-c7", "ovs_interfaceid": "2976aaab-c73e-4d12-88b9-4a36da5c35e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 19:35:06 compute-0 sshd-session[253639]: Invalid user debian from 156.227.233.86 port 55630
Jan 27 19:35:06 compute-0 sshd-session[253639]: Connection closed by invalid user debian 156.227.233.86 port 55630 [preauth]
Jan 27 19:35:06 compute-0 nova_compute[185480]: 2026-01-27 19:35:06.812 185484 DEBUG oslo_concurrency.lockutils [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Releasing lock "refresh_cache-14a5dad2-3e22-42a5-bd6e-7255c6b09d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 19:35:06 compute-0 nova_compute[185480]: 2026-01-27 19:35:06.812 185484 DEBUG nova.compute.manager [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] [instance: 14a5dad2-3e22-42a5-bd6e-7255c6b09d8f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 19:35:06 compute-0 nova_compute[185480]: 2026-01-27 19:35:06.812 185484 DEBUG oslo_service.periodic_task [None req-e7b3cd1a-bf7e-408e-8874-94509255224c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 19:35:06 compute-0 podman[253814]: 2026-01-27 19:35:06.88494892 +0000 UTC m=+0.094290273 container health_status 2085216dd40c2e0c16e582c26ffe887639c23a322ae6450a1263d78b593b7e39 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 19:35:06 compute-0 podman[253816]: 2026-01-27 19:35:06.919987567 +0000 UTC m=+0.113703543 container health_status fef358d2f14d994b39180a79319aacd6b2554f6ccae6a7434dd794e858503943 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 19:35:06 compute-0 podman[253815]: 2026-01-27 19:35:06.942962046 +0000 UTC m=+0.149647613 container health_status 94aba06993eafc07b218960d9b87ff9c0150cd81ff47f08acafb4b426870861b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21ed49c6e90ec8fa1d0f8342a21617fba853fc16560040ad820113c1afa0a861-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699-62d6c5aded3dd15dca290e0f4be27451cb8703928854402003267de28e4bd699'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 19:35:07 compute-0 sshd-session[253810]: Invalid user debian from 156.227.233.86 port 42666
Jan 27 19:35:07 compute-0 sshd-session[253810]: Connection closed by invalid user debian 156.227.233.86 port 42666 [preauth]
Jan 27 19:35:07 compute-0 nova_compute[185480]: 2026-01-27 19:35:07.905 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:35:08 compute-0 sshd-session[253936]: Invalid user debian from 156.227.233.86 port 59012
Jan 27 19:35:08 compute-0 nova_compute[185480]: 2026-01-27 19:35:08.544 185484 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 19:35:08 compute-0 sshd-session[253936]: Connection closed by invalid user debian 156.227.233.86 port 59012 [preauth]
Jan 27 19:35:09 compute-0 sshd-session[253991]: Invalid user debian from 156.227.233.86 port 45982
Jan 27 19:35:09 compute-0 sshd-session[253991]: Connection closed by invalid user debian 156.227.233.86 port 45982 [preauth]